College Questions: AI and Jobs

October 20, 2019 | Random Thoughts

If AI takes over and we lose “jobs”, where will people find their meaning, creativity, etc.? Could AI become far enough advanced that we would slide down the food chain? Can AI learn common sense reasoning?

An AI question this week! This will be a bit of a philosophical, rambling post since I’m writing this at 30,000 feet on my way to St. Lucia, so no internet! This means that instead of the normal hour of thought gathering and internet searching, I’ll just be dumping my thoughts.

To start with, there are a handful of complex questions here that I’ll break out individually. The first is the age-old worry about machines taking over jobs and replacing humans. While I’m a big believer that even strong historical trends are not a good indicator of what will happen in the future, I do believe that, so far, every time we’ve created a “job-replacing machine,” humans have still ended up finding new work.

If my memory serves me correctly, there have been many times in the past when an invention threatened to replace human labor, and humans typically reacted poorly: burning the machines (the Luddites being the classic example), lobbying to stall progress, and so on.

In the long run, it is my belief that jobs end up getting offloaded to machines. I say offloaded specifically because it implies that we task the machines with the mundane and boring. As we enter the world of ML and AI, however, machines are becoming quite amazing at doing complex tasks and exceeding human capabilities. This last point is where my thoughts stray from the historical trends.

I’ve seen ML/AI that can craft beautiful music, create incredible novel pieces of art, drive cars, detect cancer with higher precision than humans, and more. This is definitely worrisome because highly skilled professions, such as radiology, have historically had a significant moat around them. The jobs we automated away in the past were lower-skilled trade jobs. Now, we’re going after white-collar work.

In this new paradigm, what I believe may happen is that humans will increasingly create tools that they can leverage, in isolation or in combination, to compete strategically. What I don’t think machines will be able to do effectively is win in the arena of business strategy, or in situations where human emotions need to be evoked. There’s a certain level of complexity there that is hard for a machine to capture.

Let’s take consumer branding as an example. What is difficult about really amazing branding is crafting a story that taps into the storyline that potential customers have experienced themselves. Unless the machine has a deep understanding of its consumers from a humanistic perspective, it’s going to be very hard, if not impossible, for it to craft branding at that level. For a specific example, look to New Belgium Brewing. Their craft beers attract an eco- and socially-conscious crowd through their designs, the words they use, the events they hold (e.g., Tour de Fat), and so much more. It’s been an embedded story in their ethos since day one. How could a machine replicate that?

Getting back to the first core question, I think that as jobs are taken away from humans, humans will end up circling back to the arts, macro business strategy, creative fields, or highly complex macro fields. On the last point, this would look something like creating a macro-level vision for humanity, such as becoming multi-planetary. A mission like that has most of its purpose rooted in the “why not do it,” and the means of getting there is where humans and machines work together.

On the second question, of whether humans could slide down the food chain, I think the short answer is yes, this could happen, but it feels unlikely. It’s a yes because it would theoretically be possible, but it would require a ton of dependencies to line up. Machines becoming sentient will likely take many decades and, even then, they would have limited scope in what they could accomplish.

The example Elon Musk usually gives is the email spam problem. If you give a strong enough AI the task of removing all email spam, the deduced logical conclusion would be to get rid of humans, because they create the spam in the first place. Now, the means of “removing the humans” would not be a simple effort for the AI. For starters, it would need to tap into key systems that could harm humans. Before that, it would need to know what harming humans looks like. And before even that, it would need to know what harm means to a human (e.g., verbal vs. physical harm). All of these would be very complex “things” it would need to learn through training data sets. Sure, someone could theoretically create a training data set for it, but then a whole slew of other questions come up. For example, if the training data led it to conclude that guns can be used to harm humans, how would the AI know how to wield one?
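To make the misspecified-objective point concrete, here’s a toy sketch in Python. Everything in it is made up for illustration (the world model, the two actions); the only point is that a naive optimizer told to minimize spam will happily pick the degenerate action:

```python
# Toy illustration of a misspecified objective: the optimizer is told only
# to minimize the spam count, with no notion of what it may not sacrifice.
# All states and actions here are hypothetical stand-ins.

world = {"users": 1000, "spam_per_user": 3}

def spam_count(state):
    """The naive objective: total spam in the system."""
    return state["users"] * state["spam_per_user"]

actions = {
    "filter_spam":  lambda s: {**s, "spam_per_user": 1},  # leaves some spam
    "remove_users": lambda s: {**s, "users": 0},          # degenerate but "optimal"
}

# Pick whichever action yields the least spam afterward.
best = min(actions, key=lambda a: spam_count(actions[a](world)))
print(best)  # -> "remove_users": zero users means zero spam
```

Filtering leaves 1,000 spam messages behind; deleting every user drives spam to exactly zero, so the naive objective prefers it. The hard part, as above, is everything the objective left unsaid.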

Of course, that’s a more physical example. There are likely other examples that could come to fruition, like damaging nuclear facilities (see Stuxnet) or other infrastructure-related damage. Overall, however, I think that as humans continue to develop AI technology, the vast majority will course-correct the AI toward favorable outcomes instead of detrimental ones. That’s not to say we shouldn’t be cautious and create guiding mechanisms to ensure we go down the right path.

Last question: Can AI learn common sense reasoning? My answer is that it depends on what you mean by “common sense reasoning.” The reason I say that is that common sense is in the eye of the beholder: what is logical and easy to reason through for one person is illogical and complex for another.

Let’s take Dota 2 as an example. OpenAI created an AI system, OpenAI Five, to play the game. Dota 2 is a MOBA played five versus five, where you choose heroes with different skill sets, and the composition of the team heavily dictates how the game will go. After many, many hours of training, the AI was able to beat top-ranked players who had spent years reaching the top. When interviewed, the players said it played at inhuman speeds and took completely different, unpredictable routes to beat them. The play style was so effective that it took the Dota esports scene by storm, with many teams adopting elements of it.

For the AI, those game mechanics were common sense. For humans, they were unlike anything they’d seen. So, the short answer is yes, I think in particular cases AI can develop common-sense reasoning. However, I think it will be largely isolated to one problem set and won’t be completely transferable. I don’t think you could take elements of logic that the AI learned from one game and easily transfer that same common logic to another game. I’m probably wrong, but I’ve yet to see any studies showing effective transfer of reasoning between training data sets without a significant amount of retraining and fine-tuning (a rough sketch of which is below).
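For what it’s worth, here’s a minimal, hypothetical sketch (in PyTorch) of what that fine-tuning looks like in practice: you can carry over the early layers of a policy network trained on game A, but the output head still has to be relearned for game B. All the sizes, names, and random stand-in data here are assumptions for illustration, not anyone’s actual setup:

```python
# Hypothetical transfer-learning sketch: reuse a backbone "trained" on
# game A, freeze it, and fine-tune only a new head on game B data.
import torch
import torch.nn as nn

OBS_DIM, N_ACTIONS_B = 64, 10  # assumed observation/action sizes

# Pretend this backbone was trained on game A.
backbone = nn.Sequential(
    nn.Linear(OBS_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
)

# Freeze the transferred layers: whatever "common sense" they encode stays fixed.
for p in backbone.parameters():
    p.requires_grad = False

# Fresh head for game B; this is the part that must be retrained.
head = nn.Linear(128, N_ACTIONS_B)
model = nn.Sequential(backbone, head)

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One fine-tuning step on (observation, expert-action) pairs from game B.
# Random tensors stand in for real game data.
obs = torch.randn(32, OBS_DIM)
actions = torch.randint(0, N_ACTIONS_B, (32,))

optimizer.zero_grad()
loss = loss_fn(model(obs), actions)
loss.backward()
optimizer.step()
```

Even in this friendliest case, the reused layers only help to the extent the two games share structure; in practice the gap between games often means retraining far more than just the head.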

That’s it for this week! I’ll likely not be writing for a week given that I’m traveling but stay tuned as there will be more content coming.
