What are we working toward? AI (Artificial Intelligence) with Autonomy, or HI (Human Intelligence) with Blockchain?

In my humble opinion, Gilder is merely challenging the current media mindset and its focus on autonomous AI as our future by introducing an alternative mental model to the “singularity” frame held in the minds of the masses and technophiles alike. Over the past 20+ years, news and entertainment have hyped AI as our medium- to near-term technological future. The problem is that the picture this future paints is not all that positive. Sure, it’s convenient when GoogleVoice understands what we say and offers coherent replies. But what happens when, someday, GoogleVoice decides that what we ask for is not important? Or worse, wants to give us incorrect information, for whatever reason?

Real or not, that’s the fear. Consider a few AI-related movies: I, Robot, A.I., Westworld, and so on. Worse still, we all fear HAL 9000, yet these machines are billed as achievable in the near to mid term, no matter how scary. What if, as in 2001, they turn out to be dangerous? How can we be sure, once we let these thinking machines go off on their own? Some of our tech “leaders” (Musk, among others) and many among the masses don’t seem to think it’s safe. And if AI is pushed much further, too quickly and without strict oversight, might some of the masses pick up pitchforks? Does fear of AI ease or fan the flames of the current political environment?

In my opinion, AI is surrounded by fear and by questions like these: 1) How does AI make money for the masses when it takes away everyone’s jobs? 2) Current autonomous-vehicle technologies like Tesla’s self-driving cars (whether truly AI or not, they feel like AI) have a poor crash record, sometimes killing their passengers in all-consuming fires. 3) Sprinting Boston Dynamics robot-dogs (and humanoid “Terminators”) look like they could run down and kill anyone. 4) AI-powered micro-drones that fly explosive shaped charges into human skulls? That is uber scary.

All these AI future-technologies represent a terrifying, out-of-control technological future. Technology in the past was always sold as utopian, improving people’s lives, not imprisoning them. A future of killer cars, terminators, and undetectable flying micro-assassins is not hopeful. It is insane, it infringes on human rights, and it brings me back to my initial question: what are we working toward? Given these thoughts, AI is not capturing the hearts and minds of the masses to the extent that the negatives are far outweighed by the positives.

So until the singularity becomes (safely) viable, the tech industry needs a new near-term technology target. It needs to be something achievable, perhaps something already here and merely in need of investment capital. After all, the tech industry does not invest in basic science without a near-term return to justify the investment.

So why not divert some of that investment capital into a readily viable alternative, one that solves real-world problems and is highly accountable, kind of the opposite of AI? What Gilder may be doing is challenging thinkers (writers, entertainers, and intellectuals) to consider alternative mental models to the scary AI future, and to create a future technology vision the public can embrace and emotionally invest in. Something investors will want to invest in.

The idea of promoting blockchain, a highly accountable database, could, even if only in the minds of the masses, be used to keep AI in check. Who knows if it can or will, but that’s a better vision than AI going forward wholly out of control. So it raises the question: can a single source of truth, as promised by blockchain technology, make AI safer and more accountable to its human creators? Who knows whether it can prevent deathtrap cars, killer robots, and assassin drones. Autonomous AI just feels like the wrong next step. HI, Human Intelligence with Blockchain, “feels” safer, at least for now.

(20 votes, average: 4.95 out of 5)


  1. Ian Jeffreys @DNAian

    Great article, Thomas! AI and blockchain technologies will definitely be part of the near future. I’m not sure if it will guarantee safety, but who knows. I’m thinking about the Borg (AI) in Star Trek; the hive mind known as “the Collective” is similar to blockchain in my view. We certainly don’t want a “Resistance is futile” AI/blockchain world. Cheers! Ian

  2. CryptosDecrypted

    Thoughtful and well-written article, @thomashallgren. HI seems like a safer option, doesn’t it? Sadly, AI in all its magnificent potential will be explored and rolled out regardless. I genuinely worry about the implications of full-fledged/aware programmes (even if decades away) because the master-self relationship is self-evident. Enslaving something that learns 100k times faster than you seems futile, but the hubris of man is boundless.

      1. CryptosDecrypted

        A good idea, though defining exactly what constitutes ‘AI research’ would be very difficult. Even if such a law passed, it would be ignored; there is too much money to be made, and research would simply move to jurisdictions that didn’t impose legal limits, in my (admittedly ill-informed) opinion.

  3. SouthernCrossroads

    Nice post. I have recently read Life After Google, where he discusses near-future solutions. AI is a facsimile of intelligence, not true understanding. Mankind can be bested at any single task by a machine. Where mankind really excels is at being second best at a great number of things. The diversity of tasks we can be second best at makes us superior to machines that are first best at only a few.

    HI vs. AI in the near term really comes down to selling your persona for free stuff or joining a market to trade for access. This is the first battle: data autonomy versus planned providers isolating us from our privacy.

  4. Paul Oluwanisola

    Great insight into a potential future danger. I also had a personal reflection on how AI could really improve us and our well-being. I think it could only benefit us if its design is of moderate logical function.
    Just as you already mentioned, “blockchain” should be our priority focus in restructuring many of our endeavors.

    1. Thomas Hallgren Post author

      Hi Peter, well, a few thoughts come to mind. There will always be “open source” or private code that wants to run publicly on devices and the internet. However, what if AI standards were adopted, with safety and regulation requirements for the development community and for the internet itself? Bad AI could be defined, and steps taken to mitigate it in the future, much as viruses are tracked down and stopped today.

      Two preliminary steps come to mind: 1) Could there be some certification for AI developers, with background checks and GitHub-like code check-in? 2) Could all AI projects and code require registration certs to run on the public internet? Browsers now reject sites without SSL certs; why not something similar on the internet for AI? Would it be possible to block any unregistered AI from internet access?
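The SSL analogy above can be sketched in a few lines of Python. Everything here is hypothetical: the "AI registry," its signing key, the project names, and the HMAC-based "certificate" are invented stand-ins for illustration, not a real PKI or any existing protocol.

```python
# Hypothetical sketch: a gateway that refuses traffic from AI services
# lacking a certificate signed by an (invented) AI registry, analogous
# to a browser rejecting a site without a valid SSL certificate.
import hashlib
import hmac

REGISTRY_KEY = b"ai-registry-secret"  # the registry's private signing key (illustrative)

def issue_cert(project_id: str) -> str:
    """Registry signs a project ID, like a CA signing a domain for SSL."""
    return hmac.new(REGISTRY_KEY, project_id.encode(), hashlib.sha256).hexdigest()

def gateway_allows(project_id: str, cert: str) -> bool:
    """Gateway check: does the presented cert match the registry's signature?"""
    expected = issue_cert(project_id)
    return hmac.compare_digest(expected, cert)

cert = issue_cert("robot-dog-firmware-v2")
print(gateway_allows("robot-dog-firmware-v2", cert))     # registered AI → True
print(gateway_allows("unregistered-drone-swarm", cert))  # no matching cert → False
```

A real scheme would need public-key signatures and revocation rather than a shared secret, but the toy version shows the shape of the idea: registration becomes a precondition for network access.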

  5. Ilia

    Interesting article and deep thoughts, Thomas! Progress accelerates growth. The process goes like an avalanche: in the 20th century growth accelerated many times over, and blockchain and cryptography, as a catalyst, launched a chain reaction… Where will we be, and what will we see, in 5-10 or 20 years? I am sure many amazing discoveries await us! Thanks for the great article!

    1. Thomas Hallgren Post author

      Thank you, Ilia. Your feedback is much appreciated. Given more time I hope to post more thoughts regarding experience-altering technology factors that affect how users interact with applications including AI, AR, IoT, and the blockchain. Your feedback and the feedback of others on Trybe.one are certainly motivating :). Have a great weekend.

    1. Thomas Hallgren Post author

      It’s not just eluding the average Joe. There aren’t even enough engineers who understand how these things work. Some things not even their creators understand. That’s a problem. Like a virus, the researchers had better keep this locked up somewhere it can’t get out, at least until they know how to create it.