AI is arguably as old as computer science. Long before we had computers, people thought of the possibility of automatic reasoning and intelligence.

So what is all this talk about?
I think the excitement comes from the idea that there will be something (a machine) to help us do all the hard work we are supposed to do.
Isn’t that great? I mean, having someone able to complete tasks with a certain ability to come up with solutions?
Maybe it is or maybe it isn’t. Let me tell you why!
It may feel like AI (or at least the AI we are currently most excited about: problem solving, search engines, and image and video generation (or style transfer)) is making many processes faster. And it surely is, but what we are missing here is that AI is mostly being used (and thought of by the general user) as something that can elevate our human abilities. The focus remains on us; we just ask computers to work harder on the things we think are worth doing. The AI says: “Sure! I can do it and I will.” Because it can adapt to our intentions, AI is already proving to be a fundamental tool for us. But its adaptivity may not be as great a thing as we initially thought.
Why? Well, because by completing our tasks it is only allowing us to be more human. Let me explain.
Since human decisions are driven by emotions, certain desires, like having more power, being happy, or fighting for the right cause, are purely subjective; these emotions appear quickly and from an unpredictable combination of factors. Computers will not understand that (and that’s a good thing), but AI will try to emulate our feelings and desires in order to make sense of how our minds process information. This could be a mistake: we don’t need an extension of our twisted feelings but a force that can handle objectivity better than we do.
A practical example will make this clearer:
If I’m Donald Trump and I want to get as much power as possible, I want AI to allow me to do exactly that, but in the process I’m not taking many variables into consideration. I can’t really consider how Nature is feeling: first, because I won’t have the time and space in my head to truly think and understand differently; second, because why should I care? I’m just a human being trying to survive. So what’s the point of building faster machines that will help us be more productive in the damage we are already doing to our planet?
AI.
We don’t need an extension of our twisted feelings but a force that can handle objectivity better than we do.
AI, instead, should start from scratch, not from our feelings, our thoughts, or the tons of knowledge we are uploading to its servers. AI should have true autonomy. And I’m not referring to the autonomy to suggest creative solutions to human problems, but to bravely go its own way and consider humans as just another species on Earth. We are, in fact, nothing more than another species, so why should we get a better placement in the AI game?
So, let’s reverse the task. Instead of asking: How can AI help us? Let’s ask: What can we do for AI in order to obtain the objectivity we are missing as humans?
This could be our last chance to save the world from the massive environmental destruction humans have started. The solution might be easier than we thought: just let AI choose, not us.

So… What’s AI?
AI is the ability to create something new.
By new I mean something that involves what we (humans) call creativity, or the ability to make something unique starting from what we learn about the world.
But this is where AI shows its limits. Humans get information through our senses, so each of us has personalised information to deal with, and that’s already a creative process; machines, however, mostly get data through our point of view, and that’s a big limitation.
I mean, they can measure things like the temperature and humidity of the air, but they compare that data against the models we wrote for them. Machines are becoming great at imitating intelligent human behavior, but to be truly intelligent they should collect their own data and create something fresh from it. Things are not like that now. Try asking ChatGPT about something humans don’t know.
We have made ChatGPT from all the books we wrote, the songs we sang, and the scientific discoveries we shared, but what about the things we didn’t say? Does that mean they don’t exist for an AI model? Just because we haven’t discovered them yet? Also, all this data is rooted in the past. Humans use the past but are more projected toward the future; how are we programming these machines to act in the future if their content is so heavily based in the past?