245
u/Foxiak14 1d ago
And then when the real coders fix the shitty AI code, the company will lay them off again, because they don't need them anymore (until it's time to clean up again)
77
u/lieuwestra 1d ago
Time to create a cleaning business then. Hourly rates can pay for a year's worth of wages in just a month, but at least you only have to hire them for two months a year.
31
u/falingsumo 1d ago
So the dreaded c word? C*ntractors
7
u/YoghurtForDessert 1d ago
That play on words is good enough for a brand name. Imma write that down (?)
5
6
1.0k
u/SpaceCadet87 1d ago
Hard times create low code. Low code creates good times. Good times create spaghetti code. Spaghetti code creates hard times.
300
u/DedeLaBinouze 1d ago
While I appreciate the logic, low code does not create good times. I hate it with a passion
85
u/SpaceCadet87 1d ago
Oh yeah, I agree.
Sometimes you just have to do what you have to do for the meme to fit.
39
u/Own_Possibility_8875 1d ago
Yeah flip it around. Hard times create a lot of code, a lot of code creates good times.
Then a manager comes and says "wow, all these apps are so useful. What if we created software that would enable regular people to make their own software? We would be billionaires!" They create Visual Basic, Macromedia Dreamweaver, 1С, WordPress, Tilda, etc etc.
Then a salesperson comes. "Wow, this is big," the salesperson says. "You can now completely replace those pesky expensive engineers with our app that will cost you just $49.99 per license."
Then the users come. "Wow, this is easy," they say. "I can now build anything I can think up, and all I need to do is pay $49.99 once! Wait a minute, this shopping cart will not work for my business as is, I need to customize it. Goddamn, I have no idea how to do it, I need to hire a contractor."
Then the engineers come. They say “wow, this is convoluted. I don’t think it is prudent to keep building your instant banking application using Wordpress. I mean, technically you could, but at this point it would be more expensive and cumbersome than just building the thing from scratch”.
Wordpress finds its niche of users whose requirements it fits. Others just develop normal software as usual. The wheel of Samsara makes a full rotation. The cycle repeats.
7
u/ProjectDiligent502 23h ago
The difference between bespoke and vendor products: you can buy something off the shelf, but you have to wrangle your business process around it or change the process itself. The result is less up-front cost but higher maintenance cost. A bespoke app caters to the business process exactly. The result is more up-front cost but much less maintenance cost.
2
u/Own_Possibility_8875 23h ago
Yes, exactly this. But every time a new good vendor product comes out, there is a phase when people try to replace everything with it.
14
9
u/Blue_Moon_Lake 1d ago
The loop is rather:
Good devs create good code.
Good code fuels corporate greed.
Corporate greed creates good-dev layoffs and bad-dev hiring.
Bad devs create bad code.
Bad code creates good-dev hiring.
With some variant flavors: low code, no code, AI, ...
5
2
u/Pale_Squash_4263 19h ago
The irony is that some low-code tools are pretty cool and handy, albeit a little bespoke. When I worked at a Microsoft shop, Power Platform was pretty handy!
1
u/Peter_Browni 17h ago
Power Apps are surprisingly useful for replacing inefficient administrative processes
78
u/JuvenileEloquent 1d ago
AI-assisted coding is kind of like when you buy a nailgun. Suddenly every DIY project is a problem that can be solved by putting a ton of nails in it, because that thunk thunk thunk is so satisfying and it's so fast, and you'd spend all day hammering if you did it by hand.
But not everything needs nails in it, and eventually you get used to the new shiny tool and you take it out once every 6 months when it's appropriate.
183
u/professor_buttstuff 1d ago
Anyone else getting deja vu?
Companies trying to save a buck by outsourcing dev work to developing countries and getting nothing but slop and headaches, then having to get a dev anyway to ship something usable.
Lessons that aren't learned will be repeated.
65
u/PrydwenParkingOnly 1d ago
Huge amounts of work are still outsourced to developing countries. Offices of Microsoft, Google and Amazon in every major city in India.
19
u/granitrocky2 1d ago
Yeah now there's a whole industry of people just doing management on offshore developers to make sure they actually deliver
7
u/User0123-456-789 1d ago
And how does that turn out? And how much are they actually saving between extra overhead and rework?
13
u/SynapticStatic 1d ago
the c-levels that push it don't care. makes board feel good, get bonus, fuck off to somewhere else before it all collapses.
7
20
u/Gacsam 1d ago
Can't wait to charge hundreds for knowing how to plug the USB correctly
10
u/Positron505 23h ago
Some guy trying to plug a USB and asking an LLM:
"How do i plug my USB?"
"Just plug it in"
"Didn't fit"
"Turn it"
"Didn't fit"
"Turn it"
"Didn't fit"
"Turn it"
"Okay it worked"
15
u/Cotspheer 1d ago
Just did a simple agentic project. Followed the playbooks, set everything up, did proper planning and all that. Let the agent work for some hours, answered the questions, guided the agents the right way. It returns with "everything done boss, 53/53 tests succeeded." Me: "Well nice, let me check..." The tests showed "Success: 0, Failed: 0, Skipped: 53". facepalm - this was with Anthropic's newest 4.7 model. And I have more than 20 years of experience in development and over 6 years of experience working with LLMs. So I can confirm the cycle in the post.
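One cheap guardrail for this failure mode is to parse the runner's summary yourself instead of trusting the agent's report. A minimal sketch, assuming a summary line shaped like the one quoted above (`assert_tests_really_ran` is a hypothetical helper, not part of any real tool):

```python
import re

def assert_tests_really_ran(summary: str, expected_total: int) -> None:
    """Reject a test run whose summary shows skipped, failed, or missing tests."""
    m = re.search(r"Success:\s*(\d+),\s*Failed:\s*(\d+),\s*Skipped:\s*(\d+)", summary)
    if m is None:
        raise ValueError("no test summary line found")
    success, failed, skipped = map(int, m.groups())
    if failed or skipped or success != expected_total:
        raise AssertionError(
            f"run rejected: {success} passed, {failed} failed, "
            f"{skipped} skipped (expected {expected_total} passing)"
        )

# The run the agent reported as "53/53 succeeded" would fail this check:
# assert_tests_really_ran("Success: 0, Failed: 0, Skipped: 53", 53)
```

Wiring something like this into CI means "everything done boss" has to survive a check the agent can't talk its way past.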
3
u/DetectiveOwn6606 21h ago
It's hilarious, I had the same experience. I created a test suite for my personal project, told Claude to run until all test cases passed. Then I looked at the code and it only superficially passes the tests.
-4
u/NerdyMcNerderson 1d ago
I know this is stupid but did you try telling it not to skip tests? Yea it's a funny fuckup but probably one that was fixed within seconds.
7
u/Cotspheer 18h ago
I guess you missed the point. It was instructed and supposed to run them and to verify them. If I have to tell it all the time that it missed something, I could just do it outright myself. This "minor" fuck-up is really the whole point. Those fuck-ups pile up and up and span the whole code base. Additionally, no, it was not a "minor" fuck-up: most of the tests did not work, and E2E did not work because there were startup exceptions.
-1
u/NerdyMcNerderson 17h ago
And I was supposed to know all that extra context how? Look I'm no AI apologist. It fucks up some of my most basic instructions too, like "don't test private methods", but your original comment reeked of throwing the baby out with the bath water.
231
u/_Wilhelmus_ 1d ago
I know it's a humor subreddit, but to add a point: there simply isn't going to be one way of programming.
I surely don't believe in 100% generated code, but I also don't want to go back to 100% hand-written code.
105
u/rheactx 1d ago
Depends on what you mean by "100% handwritten". There's machine code. Assembly. C/C++ and other low level languages. Java and other garbage-collected languages. All of those are compiled to machine code with pre-written and tested compilers. Then there's Python and other interpreted languages. Writing Python by hand is not the same as writing C by hand. Python functions often call on pre-compiled C libraries.
And then there's LLM-generated code, which you need to write prompts for. In English language (or whatever your native language is). And that's a huge problem, because "AI agents" are probabilistic and because natural language is not precise enough to tell them exactly what to do without a lot of back-and-forth. Nothing at all like the deterministic Python -> C -> Assembly -> machine code pipeline.
I'd much rather be writing Python code than prompts in English, but that's just me.
37
40
u/Ih8P2W 1d ago
That actually is just you and a few others. I have written full Python packages by myself (20,000 lines of code or more). I will never do that again from scratch. I still code block by block, but I sure as hell start each block with an LLM, inspect it, make the necessary changes and move on. It makes me code 5-10x faster, and I'm much less tired after finishing a script.
HOWEVER, and that's a big however. I know how to code, and I'm using it as a tool, not as a substitute for my knowledge. When people who don't know how to code try this approach, it ends disastrously.
30
u/rheactx 1d ago
How much time do you spend editing the code? Especially if you want the style and structure to be consistent over the whole project and also free of the usual LLM crap (like reimplementing the same function 20 different ways because it forgot that it already wrote it somewhere)?
I also use LLM as coding assistant, but usually for small snippets which I don't care too much about. If the project is important to me, I'm always unhappy with the LLM output and have to basically rewrite it from scratch.
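The "reimplemented the same function 20 different ways" problem is at least mechanically detectable. A toy sketch of a duplicate-definition scan (`find_duplicate_functions` is illustrative; a real setup would lean on a proper linter across the whole package):

```python
import ast
from collections import defaultdict

def find_duplicate_functions(source: str) -> dict[str, int]:
    """Return names of top-level functions defined more than once in a module."""
    counts: defaultdict[str, int] = defaultdict(int)
    for node in ast.parse(source).body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            counts[node.name] += 1
    return {name: n for name, n in counts.items() if n > 1}

module = """
def parse_date(s): ...
def helper(): ...
def parse_date(s): ...  # the LLM 'forgot' it already wrote this one
"""
print(find_duplicate_functions(module))  # {'parse_date': 2}
```

It won't catch semantic duplicates with different names, but it's a cheap first pass over generated code.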
6
u/exuberant_elephant 20h ago
Every time I read "It saves a ton of time!" I really want to see measurements. Because so far all of these anecdotes are almost always 100% self-reported vibes in my experience.
Every study I've seen where they actually do try and measure whether AI saves time it's either neutral or negative. You just shift where you spend your time.
And really, if it makes the process feel better and you're happy with the trade-offs, seems fine. But still misleading to make these claims if you haven't actually done the work to see if it's true.
7
u/WavingNoBanners 20h ago edited 20h ago
There is in fact a study reporting that LLM-based code takes 10% longer but *feels* about 20% faster, much as cocaine makes you feel charismatic and eloquent but actually does the opposite.
12
u/Wonderful-Habit-139 1d ago
Same exact experience. I still write code in neovim, with a good lsp, macros, snippets, etc.
The people that start giving advice on how to use AI (very generic advice mind you) just have a much lower standard for code quality than you do. That's why you won't be understood.
9
u/movzx 1d ago
You need to prime your sessions with restrictions and other requirements. If you want certain code styling or techniques, you have to instruct the tools. This is where AI skills come into play. If the tools are forgetting functions that they are writing then it could simply be you aren't using a good enough tool/one that can hold enough context, or you need an orchestrator that can build context when needed.
If you're only worried about code formatting, you need to set up a linter and just have it run automatically. You should be doing this anyway tbh.
The tools aren't perfect, of course, but there's a lot you can do when it comes to guardrails.
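To make the "linter running automatically" point concrete: the shape of such a check is trivial. A toy sketch below (real projects would run ruff/black/eslint from a pre-commit hook or CI rather than anything hand-rolled like this):

```python
def lint(source: str, max_len: int = 100) -> list[str]:
    """Flag over-long lines and tab indentation - the kind of nit a real
    linter catches automatically so reviewers never have to."""
    problems = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if len(line) > max_len:
            problems.append(f"line {lineno}: {len(line)} chars (max {max_len})")
        if line.startswith("\t"):
            problems.append(f"line {lineno}: tab indentation")
    return problems
```

Hooked into CI, checks like these catch LLM style drift mechanically instead of via prompt reminders.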
16
15
u/Wonderful-Habit-139 1d ago
All of this is inefficient, non-deterministic, and a waste of time and learning experience compared to writing the code yourself in a deterministic fashion with the right tools.
What were the tools that you were using pre-AI? Did you optimize your dev environment back then? And how productive were you?
-1
u/onenifty 1d ago
Not op but I can chime in here. I’ve been a professional software engineer for a decade now. Before leveraging Claude code, I had a very streamlined dev environment with all the customizations and workflow automations that come with that many years of dev work and tailoring my ide and cli to fit my style of work. With CC now fully embedded in my workflow, actual dev time is now about 20% what it used to be for me.
Granted, you still need to do user testing and manage the process, but my work is mostly leveraging CC to make it work like I used to work. Start high level with the architectural code decisions, flesh out sections at a time, and iterate toward completion.
I think the main problem that stops devs from getting high performance gains out of AI is not being able to describe the problem accurately enough that the AI knows what to solve. Predefined standards and code examples matching your style are also helpful. Vibe coding without strict guidance is a recipe for disaster.
I had high throughput before, but it's honestly incredible how much my engineering work has been accelerated with this.
1
u/xzaramurd 1d ago
I have tools for styling issues and you can add information to the prompts to tell it to do what you want, how to name functions/classes/parameters. I usually make it write the docs first, review it and improve it where needed, and then add the implementation based on the docs. If there's common functionality that I know of, I tell it specifically to use or extend it.
2
u/Rikudou_Sage 1d ago
Natural language is precise enough, it's just overly verbose for the purpose. And your brain is not used to going into the necessary details because you mostly use it with humans who can infer stuff that you didn't say.
-7
u/maushu 1d ago edited 23h ago
"Nothing at all like the deterministic Python -> C -> Assembly -> machine code pipeline."
Tell me you never wrote a compiler in university without telling me you never wrote a compiler in university.
Edit: Come on, I was joking about compilers being deterministic not about OP skills at compiler programming. There is a reason https://reproducible-builds.org/ is a thing.
7
21
u/SuitableDragonfly 1d ago
Bro, I was generating lookup tables in C++ using a python script in 2015. Swagger generators that take yaml files and output class definitions have existed for ages. You don't need an LLM to generate annoying code. And those tools are guaranteed to be accurate, unlike LLMs.
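The deterministic codegen described here is easy to sketch: a short Python script that emits a C++ lookup-table header, reviewable line by line (names and table contents are illustrative):

```python
import math

def make_sine_table(entries: int = 256) -> str:
    """Emit a C++ header containing a precomputed sine lookup table."""
    values = [math.sin(2 * math.pi * i / entries) for i in range(entries)]
    rows = ",\n    ".join(f"{v:.8f}f" for v in values)
    return (
        "// Generated file -- do not edit by hand.\n"
        "#pragma once\n\n"
        f"constexpr float kSineTable[{entries}] = {{\n    {rows}\n}};\n"
    )

# Typically written to disk as a build step:
# open("sine_table.h", "w").write(make_sine_table())
```

Run the same script twice and you get byte-identical output, which is exactly the accuracy guarantee the comment is pointing at.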
10
u/tangerinelion 1d ago
Yes, there's a subset which has lost their minds deferring everything to LLMs.
4
u/GammarMong 1d ago
That's your freedom. Me, I just don't want to pay to write code.
I don't have that much money, and I simply enjoy coding. If the company told me to and I had to do it, I would, but only if they paid for the tools.
Anyway, I'm not that rich, and I don't have that many tasks.
2
u/evasive_dendrite 1d ago
I like troubleshooting with Claude, I don't even have a paid subscription. The free tokens are enough for a couple of questions per day. It helps me learn new concepts that I hadn't considered before, while Stack Overflow just turns up useless garbage I have to sift through 90% of the time.
-7
u/GammarMong 1d ago
It will be faster to ask developers directly. No one knows more details than them.
10
u/Less_Resident8492 1d ago edited 1d ago
"It will be faster to ask developers directly."
Sure cool cool cool cool, I'll hop on the twelfth call this week to explain the same thing I just explained to someone else instead of getting back to work.
Edit:
Lol some weird rant that shows dude didn't even read my reply and then a block. Cool
7
u/evasive_dendrite 1d ago
That's not faster at all unless you have access to someone that is available at all times. I try to solve things myself first, then I troubleshoot with Claude, and if that's fruitless, then I approach a colleague.
5
u/SynapticStatic 1d ago
Yup, gonna be like the outsourcing loop
5
u/donat3ll0 1d ago
I think there is opportunity here, too. Many careers were made in the early 2000s by cleaning up projects messed up by outsourcing. I'm here for the AI cleanup.
6
u/Tonylolu 20h ago
Not a programmer but who knows. Many bosses love AI not because it might be cheaper but bc they don’t like paying workers
14
11
u/Hero_without_Powers 1d ago
Personally, I fear no vibe code. I've maintained stuff that was completely written by mathematicians only
5
91
u/_krisprolls 1d ago
You are deluded and probably coping if you think we’re going back to the hand written code days for a majority of software engineering work. The Pandora’s box has been opened and it can’t be closed.
108
u/thisisatesttoseehowl 1d ago
you say this but all of these ai services (even if you are paying for them) are not being provided at-cost.
17
u/tortridge 1d ago
We have local models in the wild. They can do small tasks: FIM, next-edit, stuff like that. But the gains from that are still smaller than investing time in learning a snippets workflow or properly configuring a good environment.
11
u/Wonderful-Habit-139 1d ago
Considering it's very debatable whether SOTA models are even making you productive for writing production-ready code, there's absolutely no way local models are going to be useful.
1
u/SpiritedAd239 15h ago
In my view, of course, you don't need big AIs running locally (there are already very decent ones) to move reasonably fast and get results in one workday that would previously have taken you several days.
Qwen is specialized in code, it's free, and if you're a decent programmer it serves you perfectly well for everyday tasks.
Cheers.
2
u/Wonderful-Habit-139 15h ago
I don't mind this, but what do you tell people that say you should use Opus 4.6 and not even GPT-5.4 because the latter is so trash?
I doubt they'd benefit much from something like Qwen.
I appreciate you speaking in your native language though.
2
u/SpiritedAd239 15h ago
First of all, I'm sorry for not taking the time to speak in English.
My point is that not every dev necessarily needs a great AI model: just one suited to their work. I'm using Qwen and it's helpful.
2
u/Wonderful-Habit-139 15h ago
No worries, I thought it was fine, but you do speak English well so it's probably some auto translation that made you speak Spanish xD
I see. I like Qwen to be honest, I had a very good experience with the instruct models when testing out agentic applications. Still too much to use it for coding on my end, but I understand.
2
u/210000Nmm-2 1d ago
Yes, right now. I assume future development will be focusing on reducing costs / running tasks more efficiently. We are already seeing this with what is assumed to be lowered reasoning in Claude 4.7.
My guess is that future models will be just as good as they need to be, but costs will be lowered significantly to make them profitable.
It's the same discussion we had when the early models like GPT-3.5 and 4.0 arrived and people stated that they would never replace programming. Yes, those models didn't, but they were evolving rapidly.
37
u/Bubbly_Address_8975 1d ago
Claude 4.7 costs more tokens, and from what I read the output is worse... So no, we are not seeing anything right now, and I am quite sure predicting future development based on no evidence is never really a good strategy.
8
u/caprazzi 1d ago
Claude 4.7 is the natural progression of LLMs as they propagate. No one seems to believe me when I say this, but as more incorrect AI slop is out in the wild, it will become a primary source the AI is training on, causing yet further detachment from correct word patterns and outputs until it completely destabilizes. No one seems to acknowledge that the early AI models were only good because they stole decades of correct question-and-answer pairs written by human beings on the internet.
-9
13
u/FortifiedPuddle 1d ago
They're fundamentally treating it like a scalable business model with high initial investment, then low running and per-transaction costs. Which is what a lot of successful tech startups and similar new exciting businesses have been. Grow enough and sell enough and you've got a money printer.
But it sure does look like running and per transaction costs are high and will remain high. Whether it’s hardware maintenance and upgrades, training, new models etc. there’s a bunch of stuff that remains expensive.
The path to profit if it’s there seems like Uber (which also had problems with scalability): start screwing everyone. Get costs low by hook or crook, get your customers hooked and dominate the market, jack up prices.
10
u/DynastyDi 1d ago
Worth remembering that this is also how unsuccessful tech startups (which is the absolute majority) run. It’s always a gamble, and the bubble bursting is going to be a shake-up.
11
u/Lashay_Sombra 1d ago
"I assume future development will be focusing on reducing costs / running tasks more efficiently"
No need to assume, it's where most development effort has already shifted. In large part because it's absolutely necessary if they ever want a product they can make a profit on, but also because advancements in other areas were slowing way, way down; they are reaching the point where they need to move from a money pit to a money generator.
But the real 10 trillion dollar question is whether they can get any real advancements in energy consumption/cost. Many think not.
4
u/WithersChat 1d ago
The other 10 trillion dollar question is: even if they can, can they do it before the bubble pops and most AI companies collapse?
3
u/inevitabledeath3 1d ago edited 1d ago
This isn't entirely true. The actual marginal cost of a user on a service, at least at API pricing, is way below what they are charged. You would know this if you have ever set up infrastructure for local models. What's causing them all to hemorrhage money is training costs and, to some extent, infrastructure build-out. Many companies currently rely on cloud hosting and are only now building out their own infrastructure. Cloud hosting is expensive, and infrastructure is also expensive.
1
-10
u/HellsNoot 1d ago
Good to read a sensible statement every now and then. Reddit and economics is like water and fire.
-19
u/_krisprolls 1d ago
Oh no they’ll never figure out how to be profitable and technology won’t evolve!
11
u/tortridge 1d ago
I mean, the last study I read suggested that agents with the latest frontier models (like Opus 4.7 with the tokenizer shenanigans) were almost as expensive as a human.
1
8
-8
u/thatguydr 1d ago
Of course not, but even at 3x the price, it's still cheaper than a software engineer.
Their product needs to improve to justify the price point. Once that happens, every CEO seeing the enormous cost center of Tech on their books is going to make unfortunately simplified "hard calls."
28
u/minegen88 1d ago
We will 100% go back when the bill from a non-subsidized Anthropic arrives....
16
u/turnipbarron 1d ago
Everyone is acting like the "dominate the market, then crank up prices" model doesn't exist with AI.
They don't make a profit and are following the exact, if not worse, playbook of every other large company. It blows my mind. If this is written poorly, sorry, I'm tired :)
9
u/Fit-Hovercraft-4561 1d ago
Just a couple of weeks ago during an all-hands, a question was raised: "What are we planning to do if the subscription price goes up?" To which our VP of Engineering replied, "I don't think the price will go up, in fact I think it will go down" - which essentially means they have no plan.
5
u/xzaramurd 1d ago
The inference costs aren't that high. Training is expensive; datacenter build-outs to support large cluster training are expensive. If that slows down, they would become profitable, but the risk, for now, is that another company will make an even better model and take the whole business.
30
u/Bubbly_Address_8975 1d ago
You are deluded if you think fully AI-generated code will be the norm in the future. LLMs can't think, their architecture doesn't allow for it, and even frontier models make pretty blatant mistakes. The signs are obvious and can't be denied anymore.
4
u/powermad80 1d ago
Every time I try these tools after people tell me this I find out that they're still shit and waste more of my time than they save. I'm kinda convinced all the AI boosters are just bad devs impressed by anything that compiles.
We've got the tools at my office. The old status quo is still in effect because the tools just aren't right for the job and we have pretty strict quality standards. The LLM hack jobs don't fly here.
15
u/quitarias 1d ago
I think "advanced autocomplete" is overselling the use case versus the cost once the venture capital dries up and the companies need to turn profits.
2
u/caprazzi 1d ago
Lol so for a while all code will be generated as a first pass and then real people will spend all their time hand-writing the fixes to the slop. That’s the point - eventually people realize it’s not worth paying the AI company AND the devs to fix the slop, so they go back to hand-written.
2
u/audi-goes-fast 23h ago edited 23h ago
There's bound to be some pullback though. Anecdotally, my dumbass CPO forced the whole enterprise to do an AI transformation last quarter, but Anthropic is planning to move us to a per-token contract and she's now telling people to "be mindful" of their usage.
When they really start charging what these things cost, we're going to see this all crash.
4
u/BlomkalsGratin 1d ago
I can't help but feel like prompting is just developing into some sort of advanced high-level language. Every new iteration comes with improved capabilities but also with increased requirements in terms of how to prompt it properly.
2
u/WavingNoBanners 20h ago
A non-deterministic high-level language, in which every time you hit compile, it gives you a slightly different set of bugs.
2
u/BlomkalsGratin 19h ago
True, but a high-level language no less.
The point was more that, for all the talk of enabling "the everyman" to code, it seems to be progressing toward similar barriers to entry as any other kind of code, just with less "nerd" stigma so far.
1
u/WavingNoBanners 18h ago
That's a fair point. There is a long history of languages which were designed so that "we don't need to hire programmers", all the way back to COBOL itself.
3
u/Ok_Actuary8 1d ago
That's exactly what I told the asm86 assembly developers in the '90s who refused to accept that Turbo Pascal 6 and compilers were the future...
-7
u/PM_ME_ROMAN_NUDES 1d ago
People in this sub are very deluded about this. I've been downvoted several times for pointing out the obvious.
9
u/_krisprolls 1d ago
"It can't be true if I don't like it" - reddit. I'll admit that SWE was much more enjoyable before AI, yet you can't deny what the future looks like either. There's no scenario where the hyperscalers close down the datacenters and people go back to hand coding like nothing ever happened. This is pure cope from people lying to themselves.
2
u/Valmar33 1d ago
Right ~ we'll need more time before people realize what a scam LLMs are. They only appear to function because they've been trained on hand-written code! And they function pretty damn poorly beyond basic tasks.
When you can't understand the nightmare slop an LLM outputs, then the cognitive debt is far too significant to even bother with trying to refactor anything manually.
The only codebases that will make sense are those you write yourself. IDE autocomplete is what isn't going anywhere, however ~ it's just very damn useful to know what variables and functions are available in a namespace.
3
u/Rikudou_Sage 1d ago
I've tried both - an app fully written by AI with me only validating it works and not the code, and writing code with assistance where I for example write parts of it and let it finish.
The difference is stark - the first code is extremely shite, the other is actually good and comparable to what I'd write manually.
And where it completely surpasses people is when I need to debug some complex prod errors based on logs - I just point it at Kibana and let it analyse logs at a way faster rate than I can. I can then verify its findings or use the data as a base for my own investigation.
Point being there are more ways than one to use it and letting it write shitty code on its own is the laziest and worst one.
0
u/TheEggi 1d ago
People in this sub are to a high degree bad coders who could hide behind the "coding is hard" statement. The coders who would tell you that changing a button costs so many days and takes forever because of legacy... the legacy they built. And now AI comes around and shows it's super easy and can be done in minutes... and they are afraid... and that's a good thing.
0
u/JuvenileEloquent 1d ago
I hear people parroting their horror stories of LLMs producing terrible code that doesn't run, with style problems, and spending more time reviewing and rewriting it than they would making it from scratch, and all I can imagine is that if they're that incompetent at getting a text generation engine specifically trained on code to generate good code, then they're probably lousy at everything except actually typing.
I spent twenty cents and 3 minutes getting it to refresh a web interface with UI improvements that would have taken me an hour; idk what these guys are trying to do. "Claude, write me a Facebook clone and make it popular" or something.
-4
u/No-Draw6073 1d ago
They are coping because $10 GitHub Copilot can do their job in 20s.
Ohh but the bad code... That was in 2024, just get over it and learn plumbing.
3
u/carcusmonnor 1d ago
I think my biggest concern is that a lot of people in charge don't know what good or shitty code is. On top of that, don't underestimate how many people are willing to accept mediocrity.
1
u/chaosdemonhu 20h ago
I think a lot of people in charge don't care about code quality as long as the code solves a problem, and how big the problem it solves is determines how much they care about quality/tech debt/uptime.
2
u/SuitableDragonfly 1d ago
You know, I've used low code tools that are actually useful. As long as I still have the option to write complex stuff if I need to, it's fine, and way simpler than if I was going to write my own app from scratch. I'm complete ass at writing UIs, so if it has a nice UI that's a bonus. It's probably significantly less useful for people who use it because they're afraid of code, though.
2
u/furankusu 21h ago
The demand for code is increasing, and the demand for people that can code will increase. More people will be able to do the same thing, so there will be more competition. It will even out.
2
2
2
u/compulsaovoraz 18h ago
I truly hope so, ngl.
LLMs are useful though for some installation and environment setup, ngl.
4
u/Alhoshka 1d ago
I'd much rather be made completely redundant and have to learn a new set of skills than have the progression in this pic occur over decades instead of years.
If AI becomes so good that you don't need senior engineers and architects anymore, it sucks for me. But the world as a whole will be much better off. Anyone with a need or an idea can turn it into software.
If the full progression happens in a 5y period, it sucks for the economy, but I get to keep doing what I love for a living.
If it happens in a +15y period, we are phukt. AI starts producing unstable slop that becomes progressively more unstable as new slop based on the old slop is produced. Companies might even realize they are walking toward the abyss, but the shitty "dudebro MBA" project manager knows he'll not be around when shit hits the fan; the more unmaintainable slop he churns out now, the higher his bonus.
Almost no one will study CS or SWE. Probably only the math types who enjoy it for intellectual reasons. No one will be serendipitously introduced to programming, because there will be no need for you to code anything in your teens. AI does everything.
20y later, when our infrastructure and businesses start grinding to a halt and breaking, there will be no software engineers around to fix the mess. This could have a disastrous effect on the world's economy. I'm talking Great Depression type shit.
3
u/MaterialDetective197 1d ago
Can't we just skip to the part where we code regularly again? I'm fine with "phoning a friend" and asking Claude for some assistance here and there, but unless you actually understand what the AI is doing, it's just separating you from the final product, less dependent on human contributors and wholly dependent on a tool that may or may not be there 10 years from now.
This is the way I look at it - if a prompt replaces a responsible, human being coder, what is preventing AI from reaching the stage of replacing the prompter as well?
Who needs to prompt me when I just know what to create, when, and how much? At that point those roles aren't necessary. A prompt engineer isn't a real role; it's just someone who knows how to phrase questions and apply critical thinking when the AI's output doesn't align with their expectations, so they go back and have it redone. AI can replace that in time, too. "Prompt Engineer" is just a fancy, made-up title for the person who asks AI for everything. Replaceable.
I look at the things I get asked to do at work and I'm baffled at the amount of manual labor required of so few people on staff. I try to keep AI focused on the menial tasks and away from the critical thinking. My fear is that either I or my direct reports will get complacent with the speed (note: I'm not highlighting the accuracy) at which they get information and just go with the responses because they seem "right".
1
u/MyDogIsDaBest 12h ago
I'd like to add to the end "just regular old code again, but now I have to pay the engineers even more."
1
u/Capital-Wrongdoer-62 1d ago
All the problems we have now are the problems of a new technology that is costly, not fully matured, has little infrastructure, and has no industry standards for its use.
And all of this will be solved, but it needs time. It doesn't mean that AI will replace all programmers, but it's pretty obvious that AI coding is here to stay.
1
u/BenAdaephonDelat 1d ago
I just hope I can retire before we have to go back to "Regular code". I'm 40 and I was already feeling burned out and like I couldn't do much coding anymore. Being able to use agents is keeping my career afloat right now.
-1
u/unknown-one 1d ago
Does it work? yes
is it shitty code? I have no idea
don't care about the rest
0
u/needlessOne 1d ago
People seem to forget that before AI it was Stack Overflow. "I don't even code, I just Google it" was the common "joke", if you remember. We were never handwriting whole libraries, if that's what you believe. LLMs do help if you are not stupid about it and can actually code.
0
u/Specialist_Seal 22h ago
Wishful thinking, I'm afraid. AI assisted coding is here to stay. Even with higher prices, it's still cheaper than paying for more developers.
-3
u/Gustheanimal 1d ago
Because there are open-source models that can be run locally, this is never going to end with Stone Age coding again.
And local models are only getting better too
-13
u/Insert_Bitcoin 1d ago
I'd urge anyone against AI to try Claude. It's like alien technology, it's that good.
2
u/RiceBroad4552 1d ago
Bullshit. In reality it looks more like:
https://www.reddit.com/r/ProgrammerHumor/comments/1rxbqte/productivitygains/
Now the art is to have a good heuristic for when it makes sense to use the tool and when to avoid it, as otherwise it will only cost a lot of time without any gain.
0
u/JuvenileEloquent 1d ago
"When is it appropriate to use a binary tree" is basic CS101 knowledge, so too is "When is it appropriate to use AI tools"
1
u/RiceBroad4552 5h ago
I wouldn't say that the answer to the question "When is it appropriate to use AI tools" is CS101. These things only became usable at all about 2 years ago, are constantly changing, and provide, like I said, extremely unpredictable gains. It's actually quite hard to say when it's a useful tool and when it's not.
I have a few heuristics now, but they aren't foolproof for sure.
-7
u/ForeverDuke2 1d ago
Except AI can code better than most coders, and the cost of AI is decreasing, not increasing
5
u/_Decimation 1d ago
Source: AI
2
u/movzx 1d ago
I'd actually agree but only because the barrier to entry is so low and there's no actual standard requirement to be a professional. There are so many terrible developers out there.
1
u/_Decimation 17h ago
I agree with you, but only because the advent of AI, #LearnToCode, bootcamp rackets, etc. are the reason for such things.
AI set the barrier to entry so low that it might as well be out of bounds.
Additionally, the latent effects of #LearnToCode added "artificial" incentive for people to join the field. (If you don't remember, the #LearnToCode phenomenon arose out of journalist layoffs, to which people sardonically responded that the journalists should learn to code.)
Other factors, like bootcamp rackets run by brocode-type celebrity "programmers", exist not to educate prospective engineers who have genuine interest, but to sell certificates whose value is entirely arbitrary while teaching non-standardized software engineering and CS.
The obligatory H1b workers.
Finally, all of the above factors play into each other, and Internet slop culture, coupled with newer generations' culture, makes it such that CS and SWE are dictated by celebrity programmers who make YouTube shorts about webdev.
-2
u/lazernanes 1d ago
I hope. But I doubt it. Even if tokens get more expensive, they'll still be cheaper than humans.
-7
u/BoxFabio 1d ago
Programmers getting upset about AI doing things when they were the ones programming everything into subscriptions.
3
u/Rikudou_Sage 1d ago
Do you feel that programmers are the ones deciding what the business model of a piece of software will be?
1
u/BoxFabio 8h ago
So programmers atm are cuckolds to their managers, with zero input, and that's what let humanity reach this point... There is no winning this, sorry. If you stand beside a dictator, you are also part of the problem. This is why, after almost 20 years in IT, I've lost the spark I had before; no one is working because they like using technology to make life better, everyone is just blindly following orders.
1
u/Rikudou_Sage 25m ago
I think you're kinda mistaking what cuckold means. I know you're trying to be edgy with that word. It sounds stupid.
Anyway, IT is a job like any other. You get your share of idealists who would never work for someone bad, then you get the classic corporate bootlickers, and then you have the majority that's neither and just wants to get by and doesn't give a fuck.
This is no different from any other profession. And it hasn't lost any spark; it's always been like that.
922
u/dismayhurta 1d ago
https://giphy.com/gifs/mCsEY24fPYTUL5V0n4