AI: We Continue to Build Our Death

Tried ChatGPT 4 months ago and realized that lots of people (ironically and poetically, programmers and coders too) will be put out of a job in about 5 years, but I had no idea it was this bad:

…The Centre for AI Safety website suggests a number of possible disaster scenarios:

  • AIs could be weaponised - for example, drug-discovery tools could be used to build chemical weapons
  • AI-generated misinformation could destabilise society and “undermine collective decision-making”
  • The power of AI could become increasingly concentrated in fewer and fewer hands, enabling “regimes to enforce narrow values through pervasive surveillance and oppressive censorship”
  • Enfeeblement, where humans become dependent on AI “similar to the scenario portrayed in the film Wall-E”…

I guess it’s nicer to reference Wall-E instead of the Terminator, Ex Machina or Her.

If it makes you feel any better, all of these things are happening with social media, and yet, here we are meeting online talking about grown men professionally playing games. Life goes on… probably?

I also believe that we will see decentralization of AI as the threat of a boot-on-the-neck scenario grows with artificial general intelligence.

Interesting read if you have a chance.

Yeah, if we are going to be put out of a job by tech, then we might as well be obliviously happy until we get the boot. Well, maybe we should start prepping now so we at least land on top of the pile of unemployed.

Building your own death bot with free open-source code at home. Brilliant! I think that Elon is a doofus (Twitter is the exact hellscape that he said he was not creating), but he may have a point.

At this rate it will be Mad Max time!

I had not considered the third method/reason for killing us:

The most dangerous thing about the current moment is that

  1. AI has significant gaps and limitations
  2. People making decisions about what AI is going to do don’t understand that

It’s way too early to be replacing people with AI, but it’s already happening at scale.

Tyler Perry Raises Alarm on AI, Puts $800M Studio Expansion on Hold (hollywoodreporter.com)

Junior Investment Banking Analyst Jobs May Disappear as AI Makes Industry Inroads | Inc.com

Hey!!! You can’t make this crap up! Honestly, I don’t even understand why you would want to take the risk.

Okay, are bubbles a bad thing? I ask assuming that you are not the guy who held on until the bubble popped. I think that the focus should be on how one can capitalize, instead of “hating” on the absurdity of it all.

I read an argument once that most people would likely classify a business that only survived 6 weeks as a failure. But what if you pulled in $70k in profit during that time?

The lesson was: making money, any way you can, still counts as making money.

Surely you are familiar with the Wall Street rule: buy the rumor, sell the news?

Side note: yes, you could look at all of this from a BS herd-think Wall Street mentality perspective, but again I go back to the basic fact that Wall Street is cashing in on the ridiculousness.

I can’t hate on that!

https://www.nytimes.com/2025/07/31/technology/ai-researchers-nba-stars.html

People are making money on AI.