Q1 2026: How I See AI and its Future

It's January 2026. The AI hypestorm is in full swing.

Big companies have laid off thousands of engineers under the banner of "AI transformation". AI-generated code and content are everywhere. Investment in AI and its infrastructure is insane.

Just a couple of years ago we actively discussed the dangers of running AI without control and careful observation. Today people run AI agents autonomously, with access to the Internet and even to their bank accounts. It's called a "personal AI assistant".

Opinions are divided. While some people push AI into everything, others boycott it.

If I want to keep my sanity and move towards equanimity, riding hype waves or taking extreme positions is not an option.

I use AI. I use it a lot at my current job, less in my personal life and projects. I discuss it a lot. I reflect on how it affects my life and work. I reflect on how it affects the world and people around me.

I've reached a point where my opinion has stabilized. At the moment, I have the illusion that I understand the essence of AI and its future. I want to challenge myself: write it down and return to this article in 1-2 years to check how good I am at predicting the future and grasping the essence of emerging technologies.

If I succeed, it will be something to be proud of. A good line for a CV and interviews: "Remember the 2026 AI hypestorm? I saw where it was going!". If I fail, I'll be able to reflect and improve my prediction skills.

I focus mostly on software engineering because it's my field of expertise. I'm testing my intuition here. I want to keep it simple and save my time for more interesting things that I'll share later. ;-)

A(G)I

I see 2 paths:

  1. AGI is invented soon.
  2. LLMs just become smarter and smarter, but no quality breakthrough happens.

The invention of AGI is comparable to an alien invasion. No one can prepare for AGI. If you can prepare for it or control it, then by definition it's not AGI.

It's better to save energy and focus on things you can control and prepare for.

The rest of the article is about the world without AGI.

Essence of Software Engineering and LLMs

Code is a way to define a deterministic, reliable process that can be executed to produce the desired result. Probabilistic algorithms exist, but unlike LLMs, their statistical behavior can be characterized with much better precision.

LLMs, on the other hand, are not deterministic and not reliable.

When writing code, you need to define all the details and edge cases. What's not defined by your code, or the code you depend on, won't happen.

Unlike code, LLMs can work with informal, vague instructions. But LLMs can start doing what you didn't expect, or even what you forbade them to do. We call it "hallucinations".

When you make a bank transaction or send a message to a friend, you expect it to be done reliably. Every place where normal code is replaced with an AI agent makes your product less reliable. A product that cannot be trusted is usually hated by its users. In B2B, trust problems are even more critical. Here comes my first prediction:

Products where critical decision-making is delegated to LLMs are in danger. They will suffer reputation loss or even reputational collapse.

The LLM as an approachable interface, one that cites and references trusted sources and proposes a deterministic plan for the user's explicit approval, will survive and be widely accepted as a safe standard.
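A minimal sketch of that "propose, approve, then execute deterministically" pattern, in Python. Everything here is hypothetical (the function names, the hard-coded plan); in a real system `propose_plan` would call a model, while only deterministic, reviewable code would ever touch anything critical:

```python
def propose_plan(request: str) -> list[str]:
    """Stand-in for an LLM call that turns a vague request into a
    concrete, inspectable list of steps. Hard-coded here for illustration."""
    return ["send $50 to Alice", "log the transfer"]

def execute(steps: list[str]) -> list[str]:
    """Deterministic executor: runs only the approved steps, nothing else."""
    return [f"executed: {step}" for step in steps]

def assistant(request: str, approve) -> list[str]:
    """The LLM proposes; the user explicitly approves; plain code executes."""
    steps = propose_plan(request)
    print("Proposed plan:")
    for i, step in enumerate(steps, 1):
        print(f"  {i}. {step}")
    if approve(steps):
        return execute(steps)  # only deterministic code acts on the world
    print("Cancelled. Nothing was executed.")
    return []
```

The key property is that the boundary between the probabilistic and deterministic parts is explicit: the LLM's output is inert data until a human approves it.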

AI assistants like Copilot, OpenCode, etc. will stay and keep evolving. They will not be replaced by something conceptually different.

But this prediction makes sense only if prices stay affordable for a wide audience.

Price of the Token

Even today, using LLMs for personal needs is not cheap.

Prices can go up because of an "AI bubble collapse", because you'll be forced to pay more and more for frontier models, because demand will outstrip supply, and so on. I see many signs that LLM pricing is going to be a problem.

GPU-based datacenters will not make it. The price of AI usage will go up. That will damage companies and individuals who have become dependent on intensive LLM usage.

Specialized hardware ("IPUs"? inference processing units?) could prevent this.

The first company to achieve mass production of IPUs will reshape the AI landscape.

Quality Gravitation

When you hire and retain many experienced developers who know how to write good software, those people create a kind of "gravity". It bends code and conventions towards their vision. Their influence is what maintains your current software quality bar.

When you hire juniors en masse, you test the limits of that "gravity". If you hire too many, your experts' influence gets diluted and quality starts to decline.

The same happens when you hire a lot of new people in general. It's a well-known challenge: preserving your culture and standards while hiring at scale.

Massive AI usage is similar to hiring a crowd of juniors. Unless your current team is also a crowd of juniors, a quality drop is inevitable.

While the trend of massive LLM usage persists, the quality of existing software will decline (on average), and the quality of new software will be a disaster (on average).

But I also expect the following unusual side effect:

Really shitty companies with a massive lack of expertise will start to deliver better products while LLMs are cheap, because the LLM's "gravity" will pull them towards a higher quality level than they had.

10k vs 1k

Which is better?

  1. 10k lines of code done in one day, feature is fully implemented.
  2. 1k lines of code done in one week, feature is fully implemented.

I'm very much aligned with the ideas expressed in this article. Especially this sentence:

velocity without understanding is not sustainable

What is your review capacity? Can your engineers properly review, discuss, and reflect on every line of code produced by an LLM?

Reviewing LLM-generated code is more expensive than reviewing code written by a senior. You cannot hold an LLM accountable for its decisions. Therefore, you cannot trust it. Therefore, you need to review more carefully. Or one day you may find your production database suspiciously empty. =)

LLM-powered reviews can save a good share of review capacity, but consider: an LLM-generated review is itself subject to review. Even with that limitation, you can save a decent amount of capacity, I admit. But not as much as you'd need to properly handle 10k+ LOC per day.

Massive AI-driven speedups will have "drug addiction" properties:

The first several AI-driven breakthroughs: you've made the best product on the market. You shine in the light of glory and success. You're very high. You're the best.

After some time: you feel like shit. You need to spend more and more money on AI to save your business from collapse. It's falling apart. You lie a lot to keep it alive.

Moderation is key. Only companies wise enough not to speed up beyond their review capacity will prosper.

A note about layoffs:

A company that did massive layoffs will show "LLM hangover" symptoms faster. A massive layoff is a massive reduction in review capacity.

LLM is not Your Friend

Ooomph... at this point I'm tired of writing this article and want to finish it as fast as possible. Remember, there are far more fun things to do than worrying about the future.

I rewrote the intro to this section multiple times, then decided to give up. Straight to the point:

Extensive use of LLMs leads to a decline in cognitive skills and worse mental health. More evidence of this will appear each year (from the scientific community).

Burnout rates will grow. Personal skills will degrade.

Experts who overinvested in LLMs (both juniors and seniors) will end up like today's "cryptobros": subjects of jokes, not superheroes.

Don't listen to Sam Altman and others who profit from your potential addiction to LLMs. Listen to what the scientific community says.

It doesn't matter how much you achieve if your mental health is fractured.

Conclusion

Well, not a bright picture, eh? A lot of risks, little room for error.

I'd love to find out that I'm a bad prophet and a bright AI-powered future awaits.

But now...

I need to stop and slow down.

And I wish you do the same. Stop and slow down. Notice your muscle tension. Consciously relax.

We have no enemies to fight. It's not worth it. You cannot fight against the storm. Save your energy for the moments that truly matter. And one more time: notice your muscle tension. Consciously relax.
