AI Revolution - unless you’ve been living under a rock, chances are you’ve stumbled across these words before. Wherever you turn there’s something about AI. It’s being used to create realistic graphics, compose music and video, write code, produce striking designs, answer questions on just about anything, write essays, and solve math problems.
The list is endless, and it’s no wonder there’s so much ado about the idea of AI taking over the world. Interestingly, most articles about the AI revolution make it sound like an apocalyptic nightmare that we have to “survive”. And with tech moguls like Elon Musk warning about how dangerous AI can be and how it might take over the world, there’s a mix of fear and anticipation.
Educators have to work twice as hard to ensure their students did their homework themselves. Content creators are constantly being reminded of how their services will become obsolete in the near future. And just so you know, these aren’t the only professionals whose jobs are either being made more difficult or threatened by AI.
We are still learning about the potential of AI and chances are that in a few years, AI will be much better than it is now, and this idea is indeed frightening. But it’s not that bad. Yes, AI is a revolutionary technology but it’s not the first time we have seen this kind of thing happen.
History is full of revolutionary technologies that threatened an established system in one way or another. But that’s a conversation for another day. In this article, let’s talk about the AI revolution, what it is, where we are presently, and what the future holds.
Read Also: How To Harness The Power Of AI in Your Startup
The term artificial intelligence has quickly become a household phrase, but the concept is nothing new. It dates back to 1956, when it was coined by John McCarthy at the Dartmouth Summer Research Project on Artificial Intelligence. McCarthy brought together key researchers for a conference intended to showcase the potential of AI technology.
It worked, and a few years later AI was gaining real momentum, especially with the development of computers capable of storing information (a big deal at the time). The US government began funding AI research through agencies such as DARPA. Despite all this, progress slowed considerably because computers were still ages behind in raw processing power.
By 1997 a lot had changed. Computers were getting faster and more capable. IBM’s Deep Blue, a chess-playing supercomputer, defeated World Chess Champion Garry Kasparov in a six-game match, stunning pundits like Maurice Ashley, who before the match had said there was no way that this tin box was going to defeat a reigning world champion. Deep Blue convinced many that computers would someday best humans.
Fast forward to the 2000s, and the AI revolution was underway. But it was not until 2006 that a breakthrough happened with the release of Nvidia’s CUDA, a parallel computing platform that effectively turned GPUs into general-purpose supercomputers. You see, Nvidia is well known for manufacturing powerful GPUs (graphics processing units), which were mostly used in the gaming industry. CUDA allowed AI researchers to tap into the immense processing power of these GPUs.
Here is a timeline of everything that unfolded afterward:
2009 - ImageNet was introduced by Stanford AI researchers. ImageNet is a large database of images for training computer algorithms on image recognition.
2012 - AlexNet, a landmark image classifier, was created using GPU-powered convolutional neural networks (CNNs) and more than one million images from the ImageNet database. AlexNet was able to classify images into 1,000 categories.
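To make the idea of a CNN a little more concrete, here is a toy sketch (in Python with NumPy, purely for illustration; AlexNet itself was a far larger GPU-trained network) of the sliding-window convolution operation at the heart of image recognition. The edge-detecting kernel below is a hypothetical example, not anything taken from AlexNet.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel over a 2-D image, summing the elementwise
    products at each position (the basic operation inside a CNN layer)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image whose right half is bright, and a vertical-edge detector
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)
edge_kernel = np.array([[-1, 1],
                        [-1, 1]], dtype=float)

response = conv2d(image, edge_kernel)
print(response)  # strongest response down the middle column, where the edge sits
```

A real CNN stacks many such learned kernels, with nonlinearities and pooling between layers, so that early layers detect edges and later layers detect whole objects.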
CNNs were great for image processing, but they could not process natural language. That is to say, they did not understand commands spoken or written in ordinary human language (like English).
Although Natural Language Processing (NLP) as a discipline had existed for years, little progress had been made in terms of practical application. Researchers came up with different NLP models, but these models were only effective with short sentences and struggled with longer, more complex ones.
Even so, AI was racking up practical wins elsewhere. One example is AlphaGo, an artificial intelligence (built on deep neural networks and reinforcement learning rather than language processing) that defeated Go world champion Lee Sedol in 2016, the second time an AI had bested humans at their own game.
So we had the data and the computing power; all that was left was the ability to train computers to understand and process natural language. This last piece of the puzzle was finally solved in 2017 when Google released the Transformer, a new language processing architecture far better than the RNNs and CNNs previously used for sequence tasks.
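The core trick in the Transformer is attention: every word in a sentence can look at every other word at once instead of reading one word at a time. Here is a minimal sketch of scaled dot-product attention in Python with NumPy, using random vectors as stand-in word representations (an illustration only, not Google's implementation).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each position mixes together the
    values V, weighted by how similar its query is to every key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise similarity between positions
    weights = softmax(scores, axis=-1)   # each row is a probability distribution
    return weights @ V, weights

# Four "words", each represented by an 8-dimensional random vector
rng = np.random.default_rng(0)
seq_len, d_k = 4, 8
Q = rng.normal(size=(seq_len, d_k))
K = rng.normal(size=(seq_len, d_k))
V = rng.normal(size=(seq_len, d_k))

out, weights = attention(Q, K, V)
print(out.shape)            # (4, 8): one updated vector per word
print(weights.sum(axis=1))  # each row of attention weights sums to 1
```

Because every position is computed in parallel rather than sequentially, this design is a much better fit for GPUs, which is exactly the training speedup Google described.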
In an article published by Google the company stated that “On top of higher translation quality, the Transformer requires less computation to train and is a much better fit for modern machine learning hardware, speeding up training by up to an order of magnitude.”
This was the big break that AI researchers had been looking for, and from this moment onwards AI research grew in leaps and bounds, first with the release of GPT in 2018. GPT (which stands for Generative Pre-trained Transformer) is a deep learning language model that could generate text, summarise text, answer questions, and demonstrate strong reading comprehension, all without “task-specific training”.
GPT-2 was a more advanced version of GPT, with 1.5 billion parameters and trained on 40 GB of internet text. But both pale in comparison to GPT-3, which was trained on some 45 terabytes of text data, has 175 billion parameters, and generates about 4.5 billion words daily. GPT-3 is the foundation for over 300 AI-powered applications, including ChatGPT.
ChatGPT is an AI developed by OpenAI and first launched in November 2022. It is basically a chatbot, that is, a program designed to hold a conversation in a way that seems human-like. But its ability to process natural language and communicate effectively is not the only reason this AI has become so popular.
Another reason is that ChatGPT, powered by the GPT-3 language model, has been trained on hundreds of billions of words of internet text, making it capable of providing answers to just about any question. Additionally, each time the AI interacts with humans, the developers use the feedback provided to fine-tune it.
Considering that ChatGPT is the fastest-growing app ever with 1 million users in the first week of release and about 100 million users within two months, it is easy to see just how much data is being generated. This means in a few months, the AI will be capable of doing much more.
And speaking of which, here are the different ways you can use ChatGPT (which is available for free at the moment):
It goes without saying that ChatGPT suits most tasks that can be easily automated, are conversational in nature, and call for a prompt, detailed response. But don’t get too excited: ChatGPT is not infallible. There are certain limitations (mostly technical) that you should know about if you plan on using AI.
ChatGPT is definitely the most impressive language processing AI right now. But it is not without its limitations. Yes, the AI has a few rough edges that need to be ironed out. Here are some limitations that we have been able to gather so far.
The more people use ChatGPT, the more they experience some of its limitations. But this is expected, because the AI has not yet been released as a finished product. Rather, it is at the feedback stage, and that feedback will be used to improve it. It’s just like I said: in a few months, ChatGPT will be capable of much more, and because of its immense potential the AI has been nicknamed the “Google Killer”.
Read Also: Everything You Need to Know About AI Marketing
Big tech firms are currently in a race to dominate the AI industry. The top two contenders are Microsoft and Google, and Microsoft is currently taking the lead thanks to ChatGPT. The company had reportedly invested around 3 billion dollars in OpenAI, and with the success of ChatGPT that investment finally paid off. Microsoft quickly announced plans to invest an additional 10 billion dollars in OpenAI, a move reported to give it a 49 percent stake in the company. This investment will help to further AI research.
Microsoft plans to use ChatGPT to supercharge its search engine Bing, which means that Google is in trouble. Google responded by unveiling its own artificial intelligence, known as Bard, but the move looked late and hurried. Bard is built on LaMDA (Language Model for Dialogue Applications), which uses the same basic transformer principle as ChatGPT. However, Bard made a factual error in its launch demo that wiped about 100 billion dollars off Alphabet’s market value, driving its shares roughly 9% lower.
Despite this, there’s no doubt that Google will pick itself up and continue the race for AI supremacy. The question now is who will be the first to deliver a fully working version of an AI-powered search engine. Although the world has its eyes on Google and Microsoft, these aren’t the only companies that are staking their future on AI. Neither are these the only companies that are using AI to transform the way they do business.
Take Viable, for example. This AI-powered platform helps companies understand their customers through insights derived from customer feedback. The insights are drawn from surveys, help desk tickets, chat logs, reviews, sales call transcripts, market research, and any other customer sources, and summarised in a matter of seconds.
These insights can then be used to increase the net promoter score (NPS), reduce support ticket volumes, improve product time-to-market, and reduce operating costs through automated feedback analysis.
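As a concrete illustration of one of those metrics, here is a minimal sketch of how a net promoter score is computed from 0-10 survey ratings. The survey data below is hypothetical, and this is the standard NPS formula, not anything specific to Viable's platform.

```python
def net_promoter_score(ratings):
    """NPS = % promoters (ratings 9-10) minus % detractors (ratings 0-6),
    computed over a list of 0-10 survey responses."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

survey = [10, 9, 9, 8, 7, 6, 3, 10]  # hypothetical responses
print(net_promoter_score(survey))    # 4 promoters, 2 detractors out of 8 -> 25
```

The point of a platform like Viable is that the ratings and the free-text feedback behind them get collected and analysed automatically, instead of an analyst tallying them by hand.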
This is just one example of how valuable AI technology can be. If we have achieved this level of success in just a few years, imagine what AI will be capable of in the next 5 to 10 years. Indeed, the future of AI is an interesting one.