College Questions: What is the future of AI-Human symbiosis?

Posted by | College Questions, Random Thoughts | No Comments

Going to try out a new series where I take questions from random young adults in college on topics surrounding technology, economics, product development, software, and everything in between. The challenge for myself in these is to literally write each one in a single 1-hour sitting – with no editing – and vomit my thoughts out. I’m fully expecting these posts to be random, wild, hyperbolic, and meandering 🙂

To kick this off, I asked a friend and former short-term intern – Carson Young. Here is the direct quote from our texts:

“With regard to AI becoming symbiotic with humans or we are the “biomatter”, how would that look? What is the future of neural-link but from a broader perspective? What’s the future of AI symbiosis?”

Psch…. such an easy question to kick this series off… /s

To start with, let’s get a quick grounding on what Neuralink actually is. Neuralink falls under the bioengineering category of “brain-machine interfaces” – otherwise known as BMIs. This research largely kicked off back in the 1970s at UCLA, where DARPA provided a grant to spearhead the work. That was the first BMI-specific research; however, I’d argue that the real critical research started in 1924 with Hans Berger’s discovery of brain waves. He was, in essence, the first scientist to measure brainwaves with his development of electroencephalography. Or for us laymen…the EEG.

There’s been a ton of research where humans perform a task while hooked up to an EEG, with the exhaust data revealing the patterns of brainwave activity required to perform said task. Now, EEGs are considered a noninvasive method for monitoring brainwave activity – meaning you don’t have to crack open the skull to use one. The downside is precision. The brain is super complex, so tuning EEG monitoring down to the level of individual neural circuits is extremely difficult.

If we hop back to BMIs, most methods leverage microelectrodes, which are effectively super small electrode monitoring devices. Since the electrodes are physically implanted into the brain, we get a high degree of precision and locality when monitoring activity patterns. The obvious downside is that this is both invasive and hard to standardize clinically, because everyone’s brain is different.

So, what has Neuralink built that is different? From my interpretation of the whitepaper, they’ve effectively developed a new type of electrode and a new “meshing” of those electrodes. In their words…”minimally displacive neural probes that employ a variety of biocompatible thin-film materials.” To me, the thin-film materials are the most important innovation. Instead of each electrode being a single “point”, as electrodes are often made, their thin-film devices enable a much higher density of sensors for monitoring brain activity. You can think of the actual “mesh” of the brain device as a bunch of threads with sensors along each thread, creating the neural mesh that most people talk about and that sci-fi tends to evangelize.

I should stop here and say that there is more than just the “thin-film” that is innovative in the whitepaper. They appear to have also developed a new surgical robot to install the mesh inside the brain, as well as new sensors. I’m avoiding those topics simply because the question at hand is more related to the future of this development.

So, on that note, what the f*ck is the future of this going to be??

As with any prediction, you can take an optimistic or a pessimistic approach. For me, I believe this technology will take a while to develop and even longer to become mainstream. It’s not going to be an easy sell to humans (and even more so to insurance companies) to say “hey, implant this thing in your brain.” Nothing could go wrong, right?

There are a lot of concerns I have when I take the pessimistic approach. The first thing that comes to mind is security. I imagine these devices are going to be specifically designed as one-directional units, meaning they simply monitor and pass information along. It would seem foolish to make them bi-directional simply from a security standpoint. We don’t want people going around brain-hacking each other, because you bet your ass we’re that dumb.

Some major open questions that I think will need to be answered before going mainstream:

  1. Will these devices have kill switches?
  2. How do software/hardware updates work? (LOL – imagine regression testing these things…)
  3. What is the physical safety of these devices? (eg. head impacts causing the mesh to move)
  4. What is the lifespan of a neural mesh?

If I put on my futurist hat, I think BMIs have significant potential for changing humans. Information retrieval would be the most interesting and immediate impact these devices would have. I could effectively search for anything at any point. Humans become a walking fucking Wikipedia. Now, the key here is that I think this will help people retrieve information more effectively, but I don’t think it will dramatically impact intelligence or creativity. Yes, we will be able to learn a broader swath of topics. That said, comprehension and complex problem-solving aren’t solved by retrieval alone. Those skills are learned through teaching, with a large genetic component on top. So, I think there will still be a large cognitive disparity for innovation, but I believe it will help humans overall become more intelligent.

One random thought that popped into my head is how this would change interaction patterns among humans – specifically in social dynamics. For example, if most people have a BMI, how does that impact the dating scene? There are obvious humanistic qualities and genetics that create desirable traits from our evolutionary history – one of them being intelligence. Can you imagine being on a date and immediately retrieving deep historical facts on any topic to woo and win over your potential partner? How bizarre…

If I take Carson’s question more deeply and focus on the word “symbiosis”, I imagine that BMIs will have an ecosystem of other devices they work with – both on your body and in the physical world. I imagine you’d be able to hook it up to a hand device that can be inserted into other objects, such as a car. From there, your brain is locally hooked up to the car and symbiosis becomes true in every sense of the word. We are already testing haptic feedback suits…imagine giving any inanimate object a human brain and experiencing haptic feedback from it – both physically and mentally?

One of my buddies works at a company called Neurala where they effectively wanted to implant an AI-brain in inanimate objects (like warehouse robots, drones, etc.). With BMIs, we could take something dumb and amplify it with human intelligence based on the interaction points available in that object. For example, with a car, you could imagine a sort of API where, once your BMI is connected with the car, you’d have access to control a menu of items from this dumb device (eg. steering, blinkers, throttle, brakes, car sensors, stereo, etc.). It’s actually kind of creepy thinking about this further, especially if the device becomes bi-directional. You’d basically assume the responsibilities of the sensors from whatever device you’re taking over, creating a perceived extension of your body. What a crazy thought! Imagine having a drone that you could “interface” with. You’d be able to fly that drone wherever with your mind and, assuming bi-directionality, you’d experience the “feeling” of flying based on what the sensors provide back to the BMI. The feeling of “height” or “speed” would be a sensation to the human nervous system, creating that tingling feeling and sweaty palms as you took a drone over a cliff at – let’s say – the Grand Canyon. We’ve all seen the DJI Phantom videos of flying a drone through a sweeping landscape. Now, imagine the “feeling” of that. Pretty nuts.
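To make the “menu of items” idea concrete, here’s a tiny sketch of what that kind of device API could look like. Everything here is invented for illustration – the class names, endpoints, and one-directional design are my assumptions, not anything from the Neuralink whitepaper:

```python
# Hypothetical sketch: a dumb device advertises controllable endpoints,
# and a one-directional BMI session can discover and invoke them.
# All names and behaviors here are made up for illustration.

class Device:
    """A dumb object (car, drone, etc.) exposing controllable endpoints."""
    def __init__(self, name, endpoints):
        self.name = name
        self.endpoints = endpoints  # maps endpoint name -> control function

    def menu(self):
        """The 'menu of items' the BMI would see after connecting."""
        return sorted(self.endpoints)

class BMISession:
    """One-directional link: commands go out, nothing comes back to the brain."""
    def __init__(self, device):
        self.device = device

    def invoke(self, endpoint, value):
        if endpoint not in self.device.endpoints:
            raise KeyError(f"{self.device.name} has no endpoint '{endpoint}'")
        return self.device.endpoints[endpoint](value)

car = Device("car", {
    "steering": lambda deg: f"steer {deg} degrees",
    "throttle": lambda pct: f"throttle {pct}%",
    "brakes":   lambda pct: f"brake {pct}%",
})

session = BMISession(car)
print(car.menu())                      # ['brakes', 'steering', 'throttle']
print(session.invoke("steering", -5))  # steer -5 degrees
```

A bi-directional version would add a second path – device sensor readings flowing back into the session – which is exactly the part that gets creepy (and scary, security-wise).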

Although, that brings up something equally bad, which is having a BMI hooked up to something like a strike drone for warfare. Mentally controlling rockets until the point of impact…where there’s good, there’s also evil…

I could also see this manifesting in a totally opposite way…like the Matrix. I can imagine a world where you plug into the “network” and “surfing the web” becomes something more interesting. You could probably pair this with VR/AR to some degree as well. It would be a pretty cool experience being immersed in VR with your brain controlling navigational aspects of web surfing. When you visit Reddit, you really visit it – both mentally and visually.

As I just wrote that, one thing that did pop up as a concern is breaking what “reality” actually means. You hear stories about humans trying LSD or shrooms and visually experiencing a different reality. I wonder if a neural mesh could interfere with the brain’s ability to craft reality for us. I could see that potentially being both awesome and terrifying. I’ll think about that later since that’s a rabbit hole.

Getting back to reality, I think the early stages of this development are really exciting but will need a large amount of progress in order to be deemed “valuable”. Neuralink has been able to get 3,072 electrodes into their mesh, which is impressive and a great start. However, estimates suggest the brain has over 100 trillion neural pathways. Capturing just 0.01% of the total neural pathways would mean we need somewhere around 10,000,000,000 electrodes. Now, I’m sure this is unnecessary given that we’d probably be going after targeted functions (eg. what does moving “left” vs. “right” look like in the brain). I imagine this device would work sort of like an SDK: you implant it in someone’s brain with a whole bunch of functions, and then there’s a “learning/programming” period where the human repeatedly goes through the series of actions they want to program into the Neuralink. Once the device captures and programs those, the human can simply think of an action and it will respond. So, getting to some ten billion electrodes may not be necessary, depending on the complexity of the actions that are going to be allowed/recommended by the device.
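A quick back-of-the-envelope check of those numbers, taking the ~100 trillion pathway estimate at face value and assuming one electrode per pathway (a very generous simplification):

```python
# Back-of-the-envelope math on electrode coverage.
# Assumption: one electrode monitors one pathway (wildly generous).
total_pathways = 100e12  # ~100 trillion, per the estimate above
current = 3072           # electrodes in Neuralink's current mesh

# Coverage of today's mesh:
coverage = current / total_pathways
print(f"{coverage:.2e}")  # ~3.07e-11, i.e. billionths of a percent

# Electrodes needed to cover 0.01% of all pathways:
target = 0.0001 * total_pathways
print(f"{target:,.0f}")   # 10,000,000,000
```

In other words, 3,072 electrodes is many orders of magnitude away from even a hundredth of a percent of coverage, which is why targeting specific functions (rather than whole-brain coverage) is the only plausible path.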

It’s been an hour so I’ll stop writing here. Wasn’t expecting this to be a super thought-provoking piece but some of the thoughts around assuming the role of inanimate objects make this a much more interesting topic to explore. It makes me really wonder what it would be like to assume the sensation and “endpoints” of things that my BMI could interact with. What a world that would be.

Thanks for the question, Carson. Hope this was an interesting read for you and whoever else reads this! If you have a question that you want my mind to explore for a bit, feel free to hit me up on Twitter or on my Contact page.

On Sudden Losses

Posted by | Thoughts on Life | No Comments

Without going into too much detail, we lost one of our dogs in a freak accident recently. My wife and I were out on a date when we got the call from the emergency room.

It’s a highly emotional moment for both of us when something like this happens – especially when it’s sudden. For folks that know us well, our dogs are our children. We know each of their personalities, quirks, mannerisms, and even their barking styles and what they mean. Sure, call us crazy and maybe we are, but the love we have for these dogs transcends just a simple relationship.

It’s hard to describe the range of emotions that one can go through when something like this happens. We all cope differently. My wife does it all at once. I tend to chunk my grief into stages and spread it out to ease the pain. There’s no one way to cope with a sudden loss of someone you love.

It’s a cliche saying but make every moment count. I was putting on my shoes in the morning when our lost dog came and nudged me to scratch her neck. That was my last memory. For my wife, hers was cutting her nails, grooming her hair, and giving her a kiss on the nose. You never know when something tragic will strike. I don’t think it’s reasonable to always live like tomorrow won’t come, but I do think I’m realizing it’s becoming more important to take more frequent moments to appreciate the things in life that you have – especially those around you that support you (whether human or animal).

We tend to ignore them because we’re too busy with our lives, focusing on what we perceive as “important”. Money, jobs, politics, news, whatever. At the end of the day, we all die. When we do, what stays with us are the moments that form the foundation and central tenets of who we are. I believe those moments are often reaffirmed and hardened through deep emotional connections with those that support us through thick and thin.

There’s a great quote that we have on a painting in our kitchen: Dogs are not our whole lives, but they make our lives whole.

You were way too young to go. We’re sorry we couldn’t protect you. I hope you know that we love you very much and that you mean the world to us. I hope there’s a lot of giant fields in dog heaven for you to play in. We’ll miss you, Eve, and I hope you forgive us.


Companies Lie To Themselves

Posted by | Thoughts on Enterprise Software | No Comments

I have a bone to pick. Having worked for a decent number of enterprise-focused software companies now, I’ve determined something that truly bothers me. I think this may be very specific to software (perhaps not though!).

Companies are constantly lying to themselves – and it’s making them worse.

I mean this in particular towards the sales and product teams. I understand why they do it but I disagree with the entire premise. Here’s how it goes.

We’re in a competitive deal. There are multiple other vendors in play. It’s the final stages of the deal and BAM! The champion delivers the email no salesperson wants to read: “We went with another solution”. The org starts freaking out, tries to save it, but ultimately can’t. We bring in competitive research to do a post-mortem and figure out what happened. Then, we send out an email to the broader organization, generally saying something like the following:

  • “They weren’t the right market fit size for us”
  • “They didn’t have enough budget and we were too expensive for them”
  • “They aren’t mature enough yet for our solution”

Here’s what I don’t like about the above: none of it comes back on the company. It oftentimes feels like we pass the buck on why things didn’t go well. While those statements may be true, there is significant product development feedback in each of them.

In a bit of a rant, what I hate about it is that it creates a culture of “us vs. the customer”. Teams read through the emails and don’t view them through the lens of how we, as a company, can do better: where we can improve the product, whether our pricing structure makes sense, whether our product messaging is hitting the right audience, whether our product is easy enough to use, and so on. We say things like “well, guess that customer is going to miss out on how awesome we are”.

It’s that bravado that drives me nuts. Part of building software is the relentless effort to improve the offering and capture the largest market share possible. Now, I get that we don’t always want to do that. Sometimes we really don’t want to sell to SMBs because they have budget constraints and really aren’t mature enough. But what I’m trying to hammer home is that the culture of what we do with that information is what drives me nuts.

Extremely high performing organizations look at every loss as a stab wound. They triage it and figure out what moves they made that exposed them. They believe they are at war and feel that making a mistake jeopardizes their god-given mission. They weaponize the loss and turn it into energy to improve.

Bad performing organizations love the smell of their own shit and believe they’re building something akin to a cult where the customers are “lucky to have us”.

Don’t be like the bad performing organizations. Don’t be scared of worrying your employees by sharing what needs to change or improve. Create concrete next actions from each loss on how to improve. Instead of building a culture of shelter from that loss, build a culture of weaponizing losses into gains.