

For instance, if an AI model could complete a one-hour task with 50% success, it only had a 25% chance of successfully completing a two-hour task. Under that model, success probability decays exponentially with task length, and solving for 99% reliability means the task must be shorter by a factor of about 70.
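A quick back-of-the-envelope check of that factor-of-70 figure (just a sketch, assuming the constant-failure-rate model the quote implies, where doubling the task length squares the success probability):

    import math

    # Implied model: p(t) = 0.5 ** (t / h), where h is the "half-life"
    # task duration (here, the one-hour task completed with 50% success).
    h = 1.0  # hours

    # Solve 0.5 ** (t / h) = 0.99 for t, the longest task duration that
    # still yields 99% reliability:
    t = h * math.log(0.99) / math.log(0.5)

    print(t)      # ~0.0145 hours, i.e. about 52 seconds
    print(h / t)  # ~69, so tasks must be roughly 70x shorter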
This is interesting, and it matches my own experience. When an LLM is genuinely boosting productivity, it shoots back a solution quickly, and after a brief sanity check I can accept it and move on. When it struggles, that’s a red flag. You might get there eventually by probing it more and more, but if it’s taking too long, there’s good reason for pessimism.
In the worst case, you ask it to solve a coding problem that has no solution (it’s simply not possible to do what you’re asking), and it engages you indefinitely until you eventually realize it’s running you around in circles. I’ve wasted a whole afternoon on that nonsense.
Anyway, I worry that companies are no longer hiring junior devs. Today’s juniors are tomorrow’s elites, and in a decade there is going to be a talent gap that LLMs, at least in their current state, seem unlikely to fill.
Ok, here’s my question for an agoraphobe.
Let’s say we one day decide to build a space colony, but it’s essentially a one-way trip: after several years on the Moon/Mars/wherever, your body would acclimatize to the lower gravity to the point where returning to Earth would be difficult. You would most likely live in an underground habitat, maybe making the occasional trip up to the surface to walk around outside, but that would be a hassle since you’d have to get all suited up. So most of the time you’d just be chilling in your man cave or what have you.
As an agoraphobe, would you make the ideal pioneer on such a frontier?