DataDrivenInvestor
Jan 1
In 2023, AI is no longer hidden away behind office building walls, IT systems, or app interfaces. AI has made its way into the hearts and minds of entire populations, including people with no interest in tech.
Last year, OpenAI’s text-to-image model DALL-E 2 took the internet by storm, followed by the equally impressive Midjourney and the open-source model Stable Diffusion. Then LensaAI selfies flooded social media feeds. But it was OpenAI’s ChatGPT that truly pushed AI into the collective awareness. Now, anyone with an internet connection can access and leverage the raw computer intelligence of a highly advanced chatbot free of charge.
ChatGPT is a fine-tuned version of GPT-3, a general-purpose language model released by OpenAI in 2020. GPT-3 never made a commercial breakthrough the way ChatGPT did: it was initially released in private beta through a waiting list (I never gained access to it), and then exclusively licensed to Microsoft. GPT-3 was trained on an enormous amount of data. The entirety of Wikipedia was included in its training set, yet it accounted for only about 0.5% of the total.
Powerful and impressive as GPT-3 is, it can also be racist and terrible. It has a tendency to indulge in bias and discrimination around race, gender, and religion, spew profanities, make racist jokes, condone terrorism, and accuse people of being rapists. Compared to GPT-3, ChatGPT is impressively good at steering around controversial topics, refuses to respond to inappropriate prompts, and admits when it doesn’t know the answer to a tricky question.
ChatGPT is still far from AGI (Artificial General Intelligence), the holy grail for AI researchers. OpenAI’s CEO, Sam Altman, says ChatGPT doesn’t even come close.
In short, AGI marks the point when AI achieves a level of autonomy and can outperform humans across different domains. Imagine a single AI model that could beat you in chess, write emails for you at work, and drive your car home. We have domain-specific AIs that each do well at one of these tasks, but no single super-intelligence that could manage all of them at will. AGI is still far away. Or is it?
In the summer of 2022, public debate on AGI gained steam after The Washington Post reported that Google engineer Blake Lemoine believed Google’s chatbot LaMDA had come alive. Google responded that there was no evidence for Lemoine’s claims and fired him soon after. But judge for yourself: here are his conversations with it.
Speaking of AGI, GPT-4 is rumored to be released in the spring of 2023. Nothing substantial has been revealed about it yet, but expectations are sky-high. The few people who have tried it are not allowed to talk about it due to NDAs, but allegedly it represents just as exciting a leap as GPT-3 did.
The (human) AI writer Gary Marcus is certain that minds will be blown when GPT-4 is finally released, and that it will be bigger and better than anything impressive we have seen from OpenAI so far. However, Marcus predicts that GPT-4 will still suffer from some of the same flaws as GPT-3 because it is essentially built on the same architecture. He predicts, for example, that GPT-4 will still make “shake-your-head stupid errors”, mix truth with lies, hallucinate, lack trustworthiness, and be unreliable.
We can probably expect GPT-4 to be the world’s best “bullshitter”: capable of imitating human thought to a near-perfect extent, yet still lacking common sense, “the dark matter of intelligence” in the words of computer scientist Yejin Choi:
A way of describing it is that common sense is the dark matter of intelligence. Normal matter is what we see, what we can interact with. We thought for a long time that that’s what was there in the physical world — and just that. It turns out that’s only 5 percent of the universe (..) It’s the unspoken, implicit knowledge that you and I have. It’s so obvious that we often don’t talk about it. For example, how many eyes does a horse have? Two. We don’t talk about it, but everyone knows it.
“Common sense” develops in humans through bodily experience: we don’t put our hands on a hot stove because we know it will hurt. AIs are by nature unable to put two and two together like this, simply because they cannot experience the world as we can. We can therefore expect GPT-4 to be impressive, but not autonomous or sentient. It will not be Frankenstein’s monster or a rocket ship, but a great source of entertainment and wonder.
I write articles about tech, business, life, and career. Sign up to my newsletter on www.futuristiclawyer.substack.com