ML winter


Is the next winter in computer science coming?

You may hear the term “AI winter” dropped here and there. Those of us who are older can remember when the hype around AI peaked before, and the period of nothing that came right after it.

AI has been a concept in computer science for many decades. Originally it was built around ideas like encoding all the world's knowledge in logical structures inside inference engines, or having code that would modify and write new portions of itself. There were some interesting use cases for each of the different methods, but none of them was able to deal with the real world.

Processing inputs (images, audio, even text) was difficult, and researchers realized they didn't have the tools to deal with any version of the real world, no matter how well their AI systems worked once things were neatly encoded. When the decision makers realized there was no bridge between AI and the real world, they cut funding. The result was a dark age for AI research and implementation.

We call this the “AI winter”.

Things were looking awfully spring-like.

Over the last 20 years a small class of AI known as Machine Learning (ML) has taken off and made huge progress. We now have fairly reliable mechanisms for handling all sorts of real-world input and dealing with a wide range of concepts in a somewhat robust manner. About 10 years ago the running joke was that once something from AI actually worked, we renamed it ML to protect it.

Modern ML can tell the difference between speakers, translate speech to text, find and label cats vs. dogs in video, and do all sorts of wonderful things. If you play with it, it really does work, and it works well for a range of problems. There are quite a few applications where I have collected large training corpora and generated ML models (SVMs, RNNs, and CNNs) that did what they were supposed to, in many cases better than a human could. It isn't rocket science to use, but it sort of is to use well.
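To give a sense of how little ceremony is involved, here is a minimal sketch of the kind of classifier training I'm describing, using scikit-learn's SVM on its bundled digits dataset as a stand-in for the kind of corpus you would collect and label yourself:

```python
# Minimal sketch: train an SVM classifier on a small labeled corpus.
# The bundled digits dataset stands in for a real-world corpus you
# would collect and label yourself.
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

digits = datasets.load_digits()  # 8x8 grayscale digit images with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

clf = svm.SVC(kernel="rbf", gamma=0.001)  # a plain RBF-kernel SVM
clf.fit(X_train, y_train)                 # "training" is a single call

print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Getting it to run takes an afternoon. Getting it to behave on messy, real-world data is where "using it well" comes in.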

Example of YOLO object recognition

So we now have a suite of tools that have the ability to pick out complex objects in busy images, extract words from noisy speech, and derive meaning from long convoluted text. That's pretty cool. So cool that most people assume the machine has gained some amount of intelligence to accomplish these tasks that only humans could do before.

The ugly truth

While modern ML is an amazing step forward, don't think of it as intelligent. The ML algorithms are very advanced statistical recognizers and classifiers. Some of them have been applied to very complex mappings to create human-like work (or even superhuman work). When you look under the covers, though, it is millions to hundreds of billions of rudimentary “states” wired together to form higher-level abstractions.

If that sounds like something intelligent, it isn't. Think of it more as a hyper-complex ruler that exists in millions of dimensions at once. When you put the ruler up to the data, the input lands somewhere on the ruler, and the algorithm looks up that position to see if it has a name.

Of course it is more complex than that, and the ruler can actually be a complex mapping. But the billions of measurements are more like a database than an equivalent of the human mind. There is no reasoning or deduction unless it was already represented in the training data that went into the model. New concepts aren't “learned” so much as the statistics are improved with new examples.
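If you want a concrete (and deliberately oversimplified) picture of the "ruler as database" idea, a nearest-neighbor lookup is the bluntest version of it: store labeled measurements, then name new inputs by whichever stored measurement they land closest to. This is a toy sketch of the analogy, not how any particular deep network works internally:

```python
# Toy illustration of the "ruler as database" idea: store labeled
# measurements, then label new inputs by whichever stored point they
# land nearest to. No reasoning, just lookup.
import numpy as np

# "Training": a database of measurements with names attached.
points = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
names = ["cat", "dog", "cat", "dog"]

def classify(x):
    # Put the "ruler" up to the input: find the closest stored measurement.
    distances = np.linalg.norm(points - np.asarray(x), axis=1)
    return names[int(np.argmin(distances))]

print(classify([0.15, 0.15]))  # -> "cat"
print(classify([0.85, 0.95]))  # -> "dog"
```

A real model replaces the raw lookup with a learned mapping, but the flavor is the same: measure, compare, name.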

Thought in AI is about where it was when Prolog and LISP were considered ideal languages for AI research. The community decided to work on the interface to the real world, and it paid off with new algorithms that could use the latest silicon to do real work that had never been possible before. Everything inside that interface layer, though, is about the same as it was 30 years ago.

There has been no real advance at the heart of AI in the last 30 years.

Let that sink in. If you are in the industry you may disagree with me, but show me the new reasoning or deduction concepts. There isn't even a new way of representing knowledge. ML often just outputs a few numbers that old-school programming maps to something more complex.

If you are outside the industry this may feel wrong. You keep seeing marketing that talks about amazing things. They are amazing things in the pattern-matching world. In the world of intelligence, they are just parlor tricks, not unlike dogs and horses that count for treats.

Today's AI has no thought, no reasoning, and no ability to draw a conclusion. Every time I hear someone pontificating about the dangers of AI, I sort of chuckle, and then I get a sinking feeling knowing that the majority of people agree with them. We shouldn't worry about AI taking over, going rogue, or deciding we are obsolete. That's like saying the toaster decided to kill the family. A toaster can kill by burning down the house, but the human is still in the loop through action or inaction. Modern AI is at the same level as the toaster.

ML winter

ML is still struggling to advance without adding exponential amounts of computer hardware underneath it. There has been some great work where models get better with a smaller number of states, or with less training data. Looking at ML from 50,000 feet, though, I see the concepts headed toward their logical end. At hundreds of billions of parameters (a parameter is a value in a matrix somewhere in the model), we are pushing the limits of the silicon just as silicon is hitting a dead end.
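If "a parameter is a value in a matrix" sounds abstract, here is a minimal PyTorch sketch (the tiny two-layer network is just an example) showing that a model really is a stack of weight matrices, and that the headline parameter counts are literally the number of values stored in them:

```python
# Minimal sketch: a "model" is a stack of weight matrices, and the
# parameter count is just how many values those matrices hold.
import torch.nn as nn

model = nn.Sequential(        # a tiny two-layer network for illustration
    nn.Linear(1000, 512),     # a 1000x512 weight matrix plus a bias vector
    nn.ReLU(),
    nn.Linear(512, 10),       # a 512x10 weight matrix plus a bias vector
)

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")  # 517,642 values in a handful of matrices
```

Scale that same bookkeeping up to hundreds of billions of values and you have today's largest models.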

So maybe we aren’t quite in an ML winter, but it sure does feel like autumn. The trade space will continue to improve, but it is in a cul-de-sac. There may be some nice advances that carry it a few orders of magnitude further, but each order of magnitude gives smaller gains from a human’s perspective.

Let's compare the MobileNet series with the YOLO series for object recognition. MobileNet can find cars and people with a few million parameters. YOLO can differentiate between types of vehicles, or between men, women, boys, and girls in images, while using tens of millions of parameters. The computer scientist is floored by the difference between them. Everyone else wonders why it was so hard that you needed 20 times the hardware.
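You can check the gap yourself. A hedged sketch, assuming torchvision is installed and torch.hub can reach the ultralytics/yolov5 repository for pretrained weights (YOLOv5l is my choice of YOLO variant here, used purely for comparison):

```python
# Hedged sketch: compare parameter counts of a MobileNet classifier and a
# YOLOv5 detector. Assumes torchvision is installed and torch.hub can
# download the ultralytics/yolov5 weights.
import torch
import torchvision

def count_params(model):
    return sum(p.numel() for p in model.parameters())

mobilenet = torchvision.models.mobilenet_v2(pretrained=True)
yolo = torch.hub.load("ultralytics/yolov5", "yolov5l", pretrained=True)

print(f"MobileNetV2: {count_params(mobilenet) / 1e6:.1f}M parameters")  # ~3.5M
print(f"YOLOv5l:     {count_params(yolo) / 1e6:.1f}M parameters")       # ~46M
```

An order of magnitude more parameters buys finer-grained labels, not a different kind of understanding.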

We see this with the GPT-2 and GPT-3 models also. You can run GPT-2 on a desktop; GPT-3 requires a small cluster of machines with high-end GPUs. Now, GPT-3 will knock your socks off in that it can write long-form text and snippets of code. If you examine the output, though, it doesn't hold up very well for more complex concepts. Is it really worth all the hardware for the ability to write a little simple code, or the abstract to this blog post?
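Running GPT-2 on a desktop really is that easy. A minimal sketch, assuming the Hugging Face transformers library is installed (the prompt is just an example):

```python
# Minimal sketch: generate text with GPT-2 locally using the Hugging Face
# transformers library. The small 124M-parameter "gpt2" checkpoint runs
# comfortably on an ordinary desktop CPU.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "The coming ML winter will be caused by",  # example prompt
    max_length=60,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```

The output is fluent enough to impress at a glance, and that glance is exactly where the impression ends.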

Depending on the advances that come out of research and their applicability to the real world, I see the ML autumn lasting a few more years. The first hard frost should set in by 2025, but I'm wagering we will see it in 2022.

The ML winter is inevitable at this point. Too many expectations have been set and too much money has been invested. The first arctic blasts will come in when the gap between real applications and investors' expectations fails to get filled. We will go from AI being laughable, to mandatory for investment, to scorned.

The frost bite

The AI winter was bad. I expect the ML winter to be bad also.

We lost a lot of very interesting and promising technologies and concepts to the AI winter. Prolog and LISP were victims of perception that didn't deserve to be hurt. LISP is powerful enough that it has crawled out of the AI winter, but it is still limping where it shouldn't be. Logic programming in the style of Prolog, though, didn't make it, and we all lost there. Other areas, like fuzzy logic, never had a chance outside of a few Japanese appliances.

The ML winter may have some really regrettable casualties. The advances in ML over the last 15 years are genuinely groundbreaking. Computer science is now able to build programs that ingest real-world data and apply meaning to it without a human. That is one of the most important advances in CS ever, and losing it because it got a bad name from overhype would be a crime against our future.