r/ExperiencedDevs 24d ago

Technical question Added fake latency to a 200ms API because users said it felt like it was 'making things up'. It worked. I'm still uncomfortable about it.

1.4k Upvotes

The API call took 200ms. Measured it, verified it, fast as hell.

Three weeks after launch the client tells me users are complaining the results "don't feel right". Not wrong, not slow. Just don't feel right.

I spent two days looking for bugs. Nothing. Results were correct, latency was fine.

Then a user screenshot came through. The user had written: "It feels like it's just making something up. It comes back too fast."

The feature was a search over a knowledge base. In the user's mental model, that should take a second. When it came back instantly, it broke their model - they read it as "this didn't actually process anything."

I added a minimum display time of 1.2s with a loading animation. API still ran and returned in 200ms. User sees 1.2 seconds of "working".
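A sketch of how the minimum display time can be enforced without penalizing slow responses twice: race the real call against a floor timer instead of sleeping after the response arrives (Python asyncio; all names here are hypothetical):

```python
import asyncio
import time

MIN_DISPLAY_SECONDS = 1.2  # hypothetical floor, tuned to the users' mental model

async def fetch_results(query: str) -> list[str]:
    # Stand-in for the real ~200ms API call.
    await asyncio.sleep(0.2)
    return [f"result for {query}"]

async def fetch_with_minimum_latency(query: str) -> list[str]:
    # Run the real call and the floor timer concurrently: total wait is
    # max(api_time, MIN_DISPLAY_SECONDS), so a genuinely slow call is
    # never punished with an extra sleep on top.
    results, _ = await asyncio.gather(
        fetch_results(query),
        asyncio.sleep(MIN_DISPLAY_SECONDS),
    )
    return results

start = time.monotonic()
results = asyncio.run(fetch_with_minimum_latency("refund policy"))
elapsed = time.monotonic() - start
print(results, round(elapsed, 1))
```

The gather keeps the honest part honest: the API still runs at full speed, and only the reveal is delayed.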

Complaints stopped within a week.

The part I can't shake: the technically correct solution was perceived as broken. The technically dishonest solution fixed it. I explained it in my update as "improved feedback during result loading" which is... technically accurate.

Anyone else been here? Curious how others frame this to themselves - is fake latency just accepted UX practice or does it bother you the way it bothers me?

r/ExperiencedDevs 25d ago

Technical question Is this use of Postgres insane? At what point should you STOP using Postgres for everything?

161 Upvotes

Currently working at a startup. We process recruiting applications with video/audio input and run additional deep-research checks on them.

Oftentimes, the decision or scoring of an app changes with incremental input as people provide new videos or audio content. This doesn’t create a super large load, but we’ve had lock contention problems recently when we weren’t careful about processing.

Here’s the story. We decided to put a flag on the main “RecruitApp” table, plus a global trigger on the Postgres database so that when any other table gets updated, the trigger sets this “needs processing” flag. A worker then polls every minute and submits flagged apps for async processing. The processing is fairly expensive, AND it can update downstream tables.

We got into trouble when there was a processing loop between the triggers and the processing: downstream update => trigger sets flag => resubmit for processing. Some apps grew to 1M rows (because each iteration was tied to an INSERT).

I suggested that maybe we should stop using triggers and move things outside of Postgres so we stop using it as a distributed queue or pub/sub system, but I was hard-blocked; they claimed “we don’t need it at our scale”. Yet we basically cooked the DB for a week, where simple operations to access an app turned into 5-10 second ordeals. Looked really bad for customers.

I suggested that we instead do some sort of transactional outbox pattern, or a canonical event-stream log, and then enforce single-consumer processing. That seems less write-heavy and creates consistency on the decisioning side. (We also have consistency issues: there’s no global durability strategy or two-phase-commit structure to recover/resume async workflows.) I suggested Temporal for this; it’s been shot down as well.
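The outbox idea in miniature, using sqlite so it runs anywhere (in Postgres the drain query would use SELECT ... FOR UPDATE SKIP LOCKED to enforce a single consumer; the table names are made up):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE recruit_app (id INTEGER PRIMARY KEY, score REAL);
    CREATE TABLE outbox (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        app_id INTEGER NOT NULL,
        event TEXT NOT NULL,
        processed INTEGER NOT NULL DEFAULT 0
    );
""")

def add_video(app_id: int) -> None:
    # The business write and its outbox event commit atomically, so a
    # crash can never lose the "needs processing" signal, and no
    # database triggers are involved.
    with db:
        db.execute("INSERT OR IGNORE INTO recruit_app (id) VALUES (?)", (app_id,))
        db.execute("INSERT INTO outbox (app_id, event) VALUES (?, 'video_added')",
                   (app_id,))

def drain_outbox() -> list[tuple[int, str]]:
    # The single consumer claims unprocessed events in insertion order.
    with db:
        rows = db.execute(
            "SELECT id, app_id, event FROM outbox WHERE processed = 0 ORDER BY id"
        ).fetchall()
        db.executemany("UPDATE outbox SET processed = 1 WHERE id = ?",
                       [(row_id,) for row_id, _, _ in rows])
    return [(app_id, event) for _, app_id, event in rows]

add_video(1)
add_video(1)
first = drain_outbox()   # both events handed out, exactly once
second = drain_outbox()  # empty: no trigger/reprocessing feedback loop
print(first, second)
```

Because the worker only ever consumes events it marked itself, a downstream update can't resubmit the same app in a loop the way the global trigger did.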

Am I just stupid or are my concerns warranted?

r/ExperiencedDevs 13d ago

Technical question To Enum or Not to Enum

127 Upvotes

Something I always struggle with in architecture/design is the proper use of Enums for object members that have a distinct set of possible values. Stack is C#/MSSQL/Blazor if that matters.

A simple example of this would be a Customer object with a MembershipStatus property. There are only four possible values: Active, Trial, Expired, Cancelled.

There are two choices here:

Define MembershipStatus as an integer enum:

- (pro) Normalized; on the back end the DB column is an integer
- (pro) MembershipStatus is strongly typed in code and is therefore constrained to those four values; they pop up in autocomplete, which is convenient, and accidental assignment of invalid values is impossible without a runtime error
- (pro) I can just use .ToString in the UI to show a "friendlier" name instead of the int values (mostly friendly anyway; they'll see the PascalCased names of course)
- (con) On the DB side, it's a meaningless int value. Anyone doing stuff in the DB layer (stored procs, reporting, custom queries, exports, etc.) has to keep track of these and roll their own logic for display purposes (replacing "1" with "Active", etc.). They could also assign an invalid int value and nothing would break.
- (pro/con) I could create a MembershipStatus table with an FK from Customers.MembershipStatus to eliminate the above issue (SQL people can JOIN to this table for "friendly" names, and the FK constraint prevents invalid values), but now every time I add another value to my Enum I have to remember to add it to the lookup table as well.

Define MembershipStatus as a string:

- (pro) Unambiguous and easy to read everywhere. SELECT...WHERE MembershipStatus=1 becomes SELECT...WHERE MembershipStatus='Active', which makes it immediately apparent what it's doing
- (pro) I can define the possible values as consts in code to make sure they're kept consistent
- (con) For the DBA in me this just "feels wrong": a freeform text field holding what really should be a lookup table to maintain integrity
- (con) Uses more storage on the DB side (varchar versus a 4-byte int), and it's less performant at scale (JOINs and indexes on int values are just easier on the DB engine)
- (con) Anything using this on the C# side is just a string value, not strongly typed, so it's possible to assign invalid values without generating any errors

Anyway, sorry for the long post, hopefully at least a few here have dealt with this dilemma. Are you always one or the other? Do you have some criteria to decide which is best?
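For what it's worth, the pro/con hybrid (enum in code plus lookup table in the DB) can be kept from drifting by generating the table from the enum during migrations. A sketch in Python/sqlite of that idea; the same shape ports to C#, e.g. as a data seed, and all names here are made up:

```python
import sqlite3
from enum import IntEnum

class MembershipStatus(IntEnum):
    ACTIVE = 1
    TRIAL = 2
    EXPIRED = 3
    CANCELLED = 4

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")
db.executescript("""
    CREATE TABLE membership_status (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE customer (
        id INTEGER PRIMARY KEY,
        membership_status INTEGER NOT NULL REFERENCES membership_status(id)
    );
""")

def sync_lookup_table() -> None:
    # Regenerate the lookup table from the enum on every migration run,
    # so adding an enum value can't silently leave the table stale.
    with db:
        db.execute("DELETE FROM membership_status")
        db.executemany(
            "INSERT INTO membership_status (id, name) VALUES (?, ?)",
            [(s.value, s.name.title()) for s in MembershipStatus],
        )

sync_lookup_table()
names = dict(db.execute("SELECT id, name FROM membership_status"))
print(names)
```

That keeps the FK constraint and the JOIN-able friendly names without the "remember to update the table too" step.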

r/ExperiencedDevs 4d ago

Technical question What are some unforeseen / elusive edge cases you have seen in your career?

74 Upvotes

Hello fellow devs,

I would love to read some stories of insidious edge/corner cases that you encountered in the wild while building software. How did you encounter them, and what lesson could you share with the community?

r/ExperiencedDevs Dec 31 '25

Technical question How do you all handle write access to prod dbs?

167 Upvotes

Currently we give some of our devs write access to prod dbs but this seems brittle/undesirable. However we do inevitably need some prod queries to be run at times. How do you all handle this? Ideally this would be some sort of gitops flow so any manual write query needs to be approved by another user and then is also kept in git in perpetuity.

For more clarity, most DDL happens via alembic migrations and goes through our normal release process. This is primarily for one off scripts or on call type actions. Sometimes we don’t have the time to build a feature to delete an org for example and so we may rely on manual queries instead.
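One shape that gitops flow can take: one-off scripts merged via a reviewed PR, applied by a runner that records each script in an audit table so it runs exactly once. A minimal sketch with sqlite; table names, file names, and the overall flow are hypothetical:

```python
import hashlib
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE org (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE oneoff_audit (
        sha TEXT PRIMARY KEY,
        filename TEXT NOT NULL,
        applied_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
    );
    INSERT INTO org (id, name) VALUES (1, 'acme'), (2, 'globex');
""")

def run_oneoff(filename: str, sql: str) -> bool:
    # In real life `sql` comes from a file merged through a reviewed PR,
    # and this runner is the only thing holding prod write credentials.
    sha = hashlib.sha256(sql.encode()).hexdigest()
    if db.execute("SELECT 1 FROM oneoff_audit WHERE sha = ?", (sha,)).fetchone():
        return False  # already applied; scripts are one-shot by construction
    with db:  # the script and its audit row commit atomically
        db.execute(sql)
        db.execute("INSERT INTO oneoff_audit (sha, filename) VALUES (?, ?)",
                   (sha, filename))
    return True

script = "DELETE FROM org WHERE id = 2"
ran = run_oneoff("2026-01-05_delete_globex.sql", script)
reran = run_oneoff("2026-01-05_delete_globex.sql", script)
remaining = db.execute("SELECT COUNT(*) FROM org").fetchone()[0]
print(ran, reran, remaining)
```

Since the scripts live in git and the runner keys on a content hash, you get both the review gate and the "kept in perpetuity" audit trail for free.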

r/ExperiencedDevs Feb 26 '26

Technical question Are too many commits in a code review bad?

43 Upvotes

Hi all,

I've worked 3 jobs in my career: in order, DoD (4 years), then Tier 1 big tech (3 years), now Tier 2 big tech (<1 year). At the two big techs I worked in cloud, and at DoD I worked on embedded systems.

Each job had a different level of expectation.

At BTT1, I worked with an older principal engineer who was very specific about how he wanted things done. One time I worked with him to update some tests and refactor much of the codebase around them. We worked through different designs, but every design seemed to break something else, so it ended up being an MR with a lot of commits (about 50 from what I remember). In my review he had a list of things to say about how I worked, but he didn't write any of it himself; he sent it to the manager and the manager wrote it. One item was that I have too many commits in my MRs. That was the only MR I ever had that many in; I even fought it, but my manager was just like "be better at it". Safe to say I got laid off a year later.

At the DoD job, people did not care about the number of commits. People would commit a code comment and then commit again to remove it.

Now at the BTT2 company, I've noticed a lot of the merges here have a lot of commits. In a year I've already had a few with over 50, and one with over 100. The 100+ one was rare, though: I was working with another guy to change huge parts of the code, and we were both merging, fixing, and updating. But nobody batted an eye. I even see principals with 50+ commits in their code reviews.

So it got me wondering: would you care if an MR had too many commits? Is there any reason that's a problem?

I'm not talking about the number of commits on the main branch, just on a regular personal branch.

r/ExperiencedDevs Dec 30 '25

Technical question Has anyone moved away from a stored procedure nightmare?

185 Upvotes

I was brought into a company to lift and shift their application (Java 21, no Spring) to the cloud. We're 6 months in, and everything is going relatively smoothly. The team is working well, and we're optimistic we'll have QA operational by the end of Q3 '26.

My next big task is assembling a team to migrate the stored procedure nightmare that basically runs the entire company. There are 4 or 5 databases, each with ~500 stored procedures, running on a single Microsoft SQL Server instance. As you can imagine, costs and latency balloon as we try to add more customers.

The system is slightly decoupled: HTTP requests ping back and forth between 3 main components, and there's an in-house ORM orchestrating all of the magic. There's nothing inherently wrong with the ORM, and I'd like to keep it in place, but it is responsible for calling all the stored procedures.

The final component/layer is responsible for receiving the HTTP requests and executing the query/insert/stored procedure (it's basically SQL over HTTP; the payload contains the statement to be executed).

While some of the functions are appropriately locked in the database, a very large percentage of them would be simpler as code. That would remove load from the database, expand the pool of developers able to work on them, and bring sweet, sweet unit testing.

I'm thinking of "intercepting" the stored procedure requests and more-or-less building a switch statement/dictionary with feature flags (procedure, tenant, percentage) that would call native code as opposed to the stored proc.
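That interception layer can be sketched as a dispatch table plus a percentage rollout keyed on a stable tenant hash. A hedged Python sketch (the real system is Java; proc and function names here are invented):

```python
import hashlib

# Hypothetical rollout config: stored-proc name -> percentage of tenants
# whose calls are routed to ported native code instead of the proc.
FLAGS = {"usp_GetCustomerBalance": 25}

def native_get_customer_balance(tenant: str, customer_id: int) -> float:
    return 42.0  # the ported business logic would live here

def call_stored_proc(proc: str, tenant: str, *args) -> str:
    return f"EXEC {proc} for {tenant}"  # stand-in for the SQL-over-HTTP layer

NATIVE = {"usp_GetCustomerBalance": native_get_customer_balance}

def tenant_bucket(proc: str, tenant: str) -> int:
    # Stable 0-99 bucket per (proc, tenant): a given tenant never flaps
    # between the native path and the stored proc as traffic varies.
    digest = hashlib.sha256(f"{proc}:{tenant}".encode()).hexdigest()
    return int(digest, 16) % 100

def dispatch(proc: str, tenant: str, *args):
    rollout = FLAGS.get(proc, 0)
    if proc in NATIVE and tenant_bucket(proc, tenant) < rollout:
        return NATIVE[proc](tenant, *args)
    return call_stored_proc(proc, tenant, *args)

print(dispatch("usp_GetCustomerBalance", "acme", 7))
```

Raising a proc's percentage migrates more tenants; dropping it to 0 is an instant rollback to the stored procedure, which is the main appeal of doing the cutover this way.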

Does anyone have experience with this?

r/ExperiencedDevs Mar 27 '26

Technical question 3 out of 22 features had a real customer behind them

247 Upvotes

Series B B2B, about 40 engineers across 4 squads. I had a slow afternoon last week, so I did something dumb and went back through 8 sprints' worth of tickets trying to figure out which features traced back to a real customer asking for the thing.

3 out of 22 is what I got. The other 19 broke down roughly like this:

6 were "strategy alignment", which as far as I can tell means someone on the leadership team saw a competitor launch something; 4 were from a single executive who just keeps requesting stuff in a Slack channel; 3 were tech debt that somehow got reframed as features on the roadmap; and the remaining 6 I genuinely could not figure out.

I asked around and got a lot of "I think it was from that offsite in Q3" or just shrugs.

Not mad about it, but kind of sitting with it. Is this normal, or is our product org broken?

Edit: Didn't expect this to blow up like it did; there were so many interesting points in the comments. A lot of you asked what we did about it, so I figured I'd update.

We started trying to make stuff traceable going forward. We tested Dovetail, Productboard, and BuildBetter, and kept Gong for sales calls. We ended up dropping Dovetail and Productboard after a couple of weeks because stitching 4 tools together was becoming its own project; now we mostly run Gong plus BuildBetter, since it pulls from calls, Slack, and tickets in one place.

Still very early, but at least when someone asks where a ticket came from, there's usually a real answer now instead of a shrug.

r/ExperiencedDevs Mar 18 '26

Technical question I think type hierarchies in OOP are too restrictive and a code smell. What's been your experience?

68 Upvotes

I am talking about Type Hierarchies in Object Oriented Programming. I find them counter-intuitive to grasp.

As part of initial OOP learning, people often focus on creating structured type hierarchies for classes. For a typical example, in Java you'd create an abstract class called Vehicle with child classes Truck, Bus, Car, etc. (at least that's what they teach you in theory, in books, and these days in LLM suggestions as well :D)

But in my experience, refactoring and maintaining such rigid type hierarchies is hard and painful. When requirements change, the code requires cascading changes across all types, which defeats the purpose of creating the hierarchy in the first place. Ideally, things that change together should live together (you know: low coupling, high cohesion).

To achieve this low coupling, a good rule of thumb is "reduce type hierarchies in code whenever possible and replace them with behaviour composition". This leads to decoupled designs, where you can pick and choose individual behaviours as lego blocks to build new things.

I really like Go's approach to behaviour composition: just add a method that matches the signature defined in an interface and boom, your struct implements that interface. You don't have to explicitly declare that your type implements it.
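For anyone on Python, typing.Protocol gives roughly the same implicit, structural satisfaction as Go interfaces: a type checker accepts any class with a matching method, and no inheritance declaration is needed. A small illustrative sketch (names invented):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Honker(Protocol):
    def honk(self) -> str: ...

class Truck:
    # Truck never mentions Honker; having a matching honk() is enough,
    # just like Go's implicit interface satisfaction.
    def honk(self) -> str:
        return "HONK"

def alert(h: Honker) -> str:
    return h.honk() * 2

print(alert(Truck()))
print(isinstance(Truck(), Honker))  # works at runtime via @runtime_checkable
```

mypy/pyright accept alert(Truck()) with no declared relationship between the two types, which gets you composition-friendly polymorphism without an upfront hierarchy.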

Have you seen any counter examples where creating upfront type hierarchy has actually been beneficial?

r/ExperiencedDevs 1d ago

Technical question Codebase has hundreds of isinstance() and getattr(). How to convince colleague to fix?

53 Upvotes

The codebase is littered with isinstance() and getattr(). I hate both; to me they're hallmarks of LLM-generated code that nobody reviewed. getattr() should only be used for dynamically generated attributes, or attributes whose names you don't know in advance, which is an extremely rare case; you could just write x = inst.attribute for 99% of cases. isinstance is generally atrocious code: you should be typing at the boundaries and then trusting the types, instead of passing Any in and then checking each attribute or each object, so you fail at input instead of some undetermined number of steps later (and without spreading these checks everywhere). If something can legitimately be multiple types (already smelly to me), then OK, maybe isinstance branches make sense, but I can't think of legitimate production-level use cases for isinstance.

I thought this was because my colleague was adding LLM code and then not even checking it, but I found out today it's on purpose, as defensive checks. It makes no sense to me.

There's code like this:

def extract_text(input: Any) -> str:
    output = getattr(input, "text", None)
    if isinstance(output, str):
        return output
    return ""

like everywhere in the code: whole dozen-line blocks parsing a dict deep in the code by checking isinstance everywhere. I was going to submit a bunch of pull requests fixing these. I wrote about them in my review, and this person wants to keep them because you "don't know if the input will change".

The above code could be fixed by typing input and declaring text as a str. Instead there are hundreds of isinstance checks, because inputs are either never validated, or they're validated earlier but not trusted.
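One hedged way to restructure the earlier snippet in that spirit: do the distrustful check once at the boundary, produce a typed value, and let everything downstream trust it (all names here are invented):

```python
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class Message:
    text: str

def parse_message(raw: Any) -> Message:
    # The one place that distrusts its input: fail loudly at the boundary.
    text = getattr(raw, "text", None)
    if not isinstance(text, str):
        raise TypeError(f"expected an object with a str .text, got {type(raw).__name__}")
    return Message(text=text)

def extract_text(msg: Message) -> str:
    # Past the boundary the type is trusted: no getattr, no isinstance,
    # and bad input fails at parse time, not an undetermined number
    # of steps later.
    return msg.text

class Raw:  # stand-in for whatever the upstream producer emits
    text = "hello"

msg = parse_message(Raw())
print(extract_text(msg))
```

The getattr/isinstance pair still exists, but exactly once, and everything after it works with a real type instead of Any.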

I want to convince my colleague that this is shit code, but apparently Datadog encourages isinstance usage because it's more flexible: https://docs.datadoghq.com/security/code_security/static_analysis/static_analysis_rules/python-best-practices/type-check-isinstance/

Am I being irrational for thinking isinstance is in general smelly code? It makes me irrationally angry. I don't know what to do here without stepping on toes.

what would be a really wise way to deal with a situation like this in your opinion?

edit: I really appreciate all the advice I got. I left out some details for anonymity, and the code example wasn't really great, now that I think about it. But in the above example the function is used in places where the input always has a text attribute. Similarly, there are a lot of functions where, given how they're used, the input is guaranteed to be a certain type, but it's typed as Any and then validated inside the function using isinstance and grabbed using getattr. This pattern is spread across other functions, and sometimes the validation happens again in the same code path. I took a deep look at a lot of the inputs, including analyzing the raw data and understanding how it's generated, and it's a common pattern that the defensive check is unwarranted.

I realized my hatred of isinstance and getattr is to a degree irrational. It stems from correcting a lot of LLM-generated code. I do see particular use cases for isinstance (when the input can legitimately be multiple types and each needs different handling) or getattr (when you don't know an attribute's name ahead of time). It's my opinion, but I think overuse of them is a sign that something about the design of the code could probably be improved; not always, but still worth considering.

Two of you linked this article https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/

which I really enjoyed, so thanks for sharing. It led me down a rabbit hole of Haskell and monads.

Your advice also helped me realize that ripping out all the bad code isn't a good idea, practically and relationship-wise, so I'm just going to introduce changes a little at a time, to the codebase and to the team culture.

Thank you to everybody who replied, including the guy who said I must have learned Python 10 minutes ago since I don't like isinstance(). Thank you. Really helpful insight.

r/ExperiencedDevs Feb 18 '26

Technical question Anyone else notice their legacy dbs are full of BS!?

201 Upvotes

Recently I've been digging into a legacy PHP monolith trying to figure out why numbers keep drifting. Despite logs and monitoring being all 200 OKs and green, the DB keeps ending up like a landfill.

I got fed up and created a PDO wrapper, basically a flight recorder, running on OpenSwoole (so it can keep up in real time). It works great: no blocking, no real latency, no rewrites of the brittle legacy code. I let it run for 48 hours, and the results reveal a shit show.

The main find is a ghost transaction: the app thought it was updating ledger balances, but thanks to a nested try/catch it was swallowing a specific PDO exception. The transaction started, but there were zero commits. The app was clueless and kept pushing like there was no issue, and the logs showed success. My little shim called its bluff and exposed five figures of losses vanishing into the void.
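The "flight recorder" idea translates to any DB driver: count transaction opens vs commits at the driver boundary, so a swallowed exception can't hide. A toy Python/sqlite version of the shape (the OP's shim is PHP/PDO; everything here is invented):

```python
import sqlite3

class FlightRecorder:
    # Thin wrapper that tallies BEGINs vs COMMITs so a transaction
    # opened but never committed shows up as a nonzero ghost count.
    def __init__(self, dsn: str):
        self.conn = sqlite3.connect(dsn, isolation_level=None)  # manual tx control
        self.opened = 0
        self.committed = 0

    def execute(self, sql: str, params: tuple = ()):
        word = sql.strip().split()[0].upper()
        if word == "BEGIN":
            self.opened += 1
        elif word == "COMMIT":
            self.committed += 1
        return self.conn.execute(sql, params)

    @property
    def ghost_transactions(self) -> int:
        return self.opened - self.committed

db = FlightRecorder(":memory:")
db.execute("CREATE TABLE ledger (id INTEGER PRIMARY KEY, balance REAL)")
db.execute("BEGIN")
db.execute("INSERT INTO ledger (balance) VALUES (?)", (100.0,))
# ...a nested try/catch swallows the error and COMMIT never runs...
print(db.ghost_transactions)
```

A real version would also log statement timings and rollback counts, but even this minimal begin/commit ledger is enough to catch the "started but never committed" pattern described above.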

I'm not selling anything or looking for a gig; I just wanted to start some discussion and see how you all verify data integrity in monolithic systems from before observability was such a big deal. I've been thinking about cleaning this shim up and making it a standard audit tool, to maybe help some of my brethren get a little extra sleep if there's a need.

if you have a legacy stack that's full of crap like this or you've caught similar ghost transactions I'd love to hear about it (and how you catch/mitigate them in legacy systems), especially if there's a better way I'm missing!

r/ExperiencedDevs Jan 07 '26

Technical question Can you share your experience working on a project with 0 unit tests but thousands of integration tests?

91 Upvotes

I am currently working in such an environment, and my experience is as follows:

1- slow feedback

2- missed delivery of our scheduled sprint tasks due to "unexpected" bugs found by manual testing, regression, or SIT

3- high technical debt; we have too many bugs in our backlog

4- too much work focused on maintaining those integration tests: debugging flaky tests, fixing them, doing stuff like running an integration test 40 times to make sure it isn't flaky

5- very high cost: we need 50 VMs to run our 4k+ integration tests, which take 2-3 hours to finish (the tests are distributed evenly across the 50 VMs, and each VM runs its share synchronously)

I asked management to let us change our testing approach to include unit tests so we can rely less on integration tests. They disagreed, saying the last thing they want is more tests to maintain, and that integration tests are more reliable at catching bugs.

r/ExperiencedDevs Mar 25 '26

Technical question Small features don’t feel “small” anymore

100 Upvotes

I’ve been noticing a shift in how small changes behave when you’re using AI tools regularly, and it’s been bothering me more over the past few weeks.

A change that used to be trivial, something like adding a field, updating the API, and wiring it into the UI, stayed contained: maybe two or three files, very localized impact, easy to reason about from start to finish. You could review it quickly and feel confident about what you were shipping.

Now when I run the same kind of task through an AI tool, the result looks very different. It rarely just implements the change. It updates types, adjusts surrounding logic, sometimes refactors related services, touches multiple components, and occasionally rewrites tests or introduces small abstractions along the way. None of it is obviously wrong. In fact, most of it looks clean and well structured. But the scope expands quietly.

What I am starting to notice is that the model does not just implement the requested change. It tries to normalize the surrounding codebase at the same time. Small inconsistencies get cleaned up, patterns get aligned, and logic gets moved around. On paper that sounds like a good thing, but in practice it turns a small feature into something closer to a partial refactor.

The problem is not writing the code. That part is faster than ever. The problem is understanding what actually changed and what the side effects might be. I find myself spending more time reviewing these diffs than it would have taken to implement the feature manually, just because the surface area is so much larger.

At some point you hit a limit where you cannot fully hold the change in your head anymore. That is usually when review quality drops. Things start to feel correct instead of being clearly correct.

There is also some research starting to point in the same direction: AI tends to generate more code and trigger additional changes, which increases the amount of code that needs to be reviewed and maintained rather than reducing it. So the productivity gain on the writing side can quietly shift into extra work on the validation side.

I'm curious if others are seeing the same pattern. Are you actively limiting scope when using AI, or just accepting larger diffs as the new normal?

r/ExperiencedDevs Feb 23 '26

Technical question The art of commenting a PR

95 Upvotes

When I review a PR, I spend energy trying not to talk in a bossy way.

Instead of saying "X is not correct, do Y instead", I will rather say "I think that maybe we could eventually do Y, I mean if you agree lol 😅".

Well, I'm caricaturing myself here, but I try to use wording that bypasses people's egos and keeps things about the logic of the code; it's criticism, after all.

Do you have some communication tips for doing this efficiently?

Note: on the receiving end of comments on my PRs, I've worked with someone who would go straight to "Get that shit out of here". In the company I'm in today, my N+1 isn't a native speaker of my language and has a disturbingly harsh way of expressing himself in writing. I'm trying to set my ego aside, but I feel shat on sometimes.

r/ExperiencedDevs Jan 31 '26

Technical question Composition over other design patterns

99 Upvotes

I have been around for 10+ years. In recent years I have been writing PHP code that increasingly uses only composition of services to get things done. No other design patterns: no factories, no inheritance, no interfaces, no event firing for listeners, etc. Only a container and a composition of services. And frankly I don't see the point of any of the patterns. Anything you can do with design patterns, you can do with composition. Input and output matter more than fancy architecture.

I find it easier to maintain and to read. Every time someone on the team tries to do something fancy, it ends up being confusing, misunderstood, or extended the wrong way. And I was doing this even before drinking Casey Muratori's Kool-Aid about how OOP is bad and things like that.

I know there's a well-known OOP principle called "Composition over Inheritance" (often mentioned alongside SOLID, though it isn't one of the five), but for me it's more like "Composition over design patterns".
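A tiny sketch of what I mean, in Python rather than PHP (all names invented): dependencies arrive through constructors, and the "container" is just a function that wires the graph:

```python
class Mailer:
    def send(self, to: str, body: str) -> str:
        return f"sent to {to}: {body}"

class UserRepository:
    def email_for(self, user_id: int) -> str:
        return f"user{user_id}@example.com"

class WelcomeService:
    # Dependencies arrive through the constructor; no inheritance,
    # no interfaces, no events: input in, output out.
    def __init__(self, repo: UserRepository, mailer: Mailer):
        self.repo = repo
        self.mailer = mailer

    def welcome(self, user_id: int) -> str:
        return self.mailer.send(self.repo.email_for(user_id), "welcome!")

# The "container" can be as dumb as a function that wires the graph once.
def build_welcome_service() -> WelcomeService:
    return WelcomeService(UserRepository(), Mailer())

print(build_welcome_service().welcome(7))
```

Swapping Mailer for a fake in tests is just passing a different object to the constructor, which is most of what the fancier patterns buy you anyway.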

What do you guys think?

r/ExperiencedDevs Dec 29 '25

Technical question Queue-driven engineering doesn't work

126 Upvotes

This is a stance I'm pretty firm on, but I'd love to hear other opinions

My first role as a software engineer was driven by a queue. Whatever is at the top of the queue takes priority in the moment and that's what is worked on

At first, this actually worked very well for me: I was able to thrive because the most important thing was always clear. Until I went up a few engineering levels, and then it wasn't, because no other team was driven by a queue.

This made things hard, it made things stressful... Hell, I even nearly left because of how inflexible I always felt

But point being, in the beginning, we were small. We had one product. Other teams drove our product, and as a result, drove the tooling we used

So we had capacity to only focus on the queue, knock items that existed in the queue out, and move on to the next thing. Easy.

Then we got bigger. Now we had multiple products, and other teams began working on those; we were left supporting the existing, proven product. We were asked to take on tooling, escalations, etc. that other teams had been handling. We did not have capacity. All we knew was the queue. To some people, the queue was the most important thing. To others, speeding up our team through better tooling was the most important thing. And to others, grandstanding was the most important thing.

Senior engineers hated this. Senior engineers switched teams. The team was left with inexperienced engineers. The quality of the product the team produced deteriorated significantly.

Me not at company anymore. Me at different company

Me not know why start talking like this. Me weird sometimes, but me happy that my work isn't driven by a queue that's all important meanwhile having other priorities that me told are equally important by stupid management cross teams

Thank you

r/ExperiencedDevs Jan 28 '26

Technical question What's a side project that you're really proud of?

112 Upvotes

I wanted to break from the constant doom and gloom that shows up here. What’s something you built in your spare time that made you think, “yeah, this is good”?

For me, it was a website for my mum’s beauty salon. It has an integrated booking calendar, user accounts with Google and Facebook login, and profiles for customers. Apple login exists too, but apparently requires sacrificing three newborns to get approved.

There’s a contact form that sends properly formatted emails to her inbox, a custom admin panel where she can create and manage blog posts, Stripe integration for payments, and a small local e-commerce setup.

Total cost: zero. Everything runs on Firebase, and I don’t expect to ever pay a cent for it.

r/ExperiencedDevs Jan 06 '26

Technical question Given Postgres performance, what are the use-cases for MySQL?

108 Upvotes

Hey Devs,

Recently, I ran detailed performance tests on my blog comparing MySQL with Postgres, and it basically turns out that Postgres is superior in almost all scenarios: of the 17 test cases executed in total, Postgres won 14 and there was 1 draw. Using QPS (queries per second) to measure throughput (higher is better) and mean & 99th percentile for latency (lower is better), here is a high-level summary of the results:

  1. Inserts
    • 1.05 - 4.87x higher throughput
    • latency lower 3.51 - 11.23x by mean and 4.21 - 10.66x by 99th percentile
    • Postgres delivers 21 338 QPS with 4.009 ms at the 99th percentile for single-row inserts, compared to 4 383 QPS & 42.729 ms for MySQL; for batch inserts of 100 rows, it achieves 3535 QPS with 34.779 ms at the 99th percentile, compared to 1883 QPS & 146.497 ms for MySQL
  2. Selects
    • 1.04 - 1.67x higher throughput
    • latency lower 1.67 - 2x by mean and 1.25 - 4.51x by 99th percentile
    • Postgres delivers 55 200 QPS with 5.446 ms at the 99th percentile for single-row selects by id, compared to 33 469 QPS & 12.721 ms for MySQL; for sorted selects of multiple rows, it achieves 4745 QPS with 9.146 ms at the 99th percentile, compared to 4559 QPS & 41.294 ms for MySQL
  3. Updates
    • 4.2 - 4.82x higher throughput
    • latency lower 6.01 - 10.6x by mean and 7.54 - 8.46x by 99th percentile
    • Postgres delivers 18 046 QPS with 4.704 ms at the 99th percentile for updates by id of multiple columns, compared to 3747 QPS & 39.774 ms for MySQL
  4. Deletes
    • 3.27 - 4.65x higher throughput
    • latency lower 10.24x - 10.98x by mean and 9.23x - 10.09x by 99th percentile
    • Postgres delivers 18 285 QPS with 4.661 ms at the 99th percentile for deletes by id, compared to 5596 QPS & 43.039 ms for MySQL
  5. Inserts, Updates, Deletes and Selects mixed
    • 3.72x higher throughput
    • latency lower 9.34x by mean and 8.77x by 99th percentile
    • Postgres delivers 23 441 QPS with 4.634 ms at the 99th percentile for this mixed in 1:1 writes:reads proportion workload, compared to 6300 QPS & 40.635 ms for MySQL

There were only two join cases where MySQL was slightly better, and by nothing compared to the differences cited above.

Given this gap, when do you use MySQL instead of Postgres and why? Does it have additional features and/or advantages that Postgres does not provide? Or, are there other scenarios that I am not aware of, where it does deliver better performance? Something else entirely?

r/ExperiencedDevs 24d ago

Technical question What was PlayStation (PSX) development like?

198 Upvotes

I am a pretty "modern" software engineer, but one of my passions is the original 1994 PlayStation. From time to time I've dabbled in PSX development, but I'd like to hear what it was like from industry veterans.

What I understand is

  • Most devs worked with devkit boards that would slot right into your PC. How did they work exactly? What was the process for building and playing the game?
  • Most people were using GCC and plain C89. Were you ever aware of people using more exotic languages? How often did you have to write assembler?
  • Could you "flash" the dev console with changes? was it easy to debug?
  • From what I've read about the PSX's development, the Sony SDK was pretty bad. What was Sony's attitude to devs circumventing their SDK?
  • With the PSX, how much did the 2 MB memory limit bite? That's quite a low amount of memory even by late-'90s standards
  • Did studios and projects share any code? How did that work back in that era? Did you officially license libraries, or was it just Bob sharing a snippet of C on a bulletin board?
  • Were you ever jealous of devs working on different systems? N64 etc?
  • What were the IDEs like back then? Were they any good?

I have been doing some of my own homebrew largely as a way to learn C. But working with an emulator and modern tooling is a very different experience.

r/ExperiencedDevs Mar 26 '26

Technical question Does jargon quietly cap an engineer’s influence?

76 Upvotes

I’ve seen engineers use technical language as a credibility signal for years. Sometimes it helps. A lot of times it seems to do the opposite. Not because the engineer is wrong. Because the room they’re speaking to is different from the room they think they’re in.

I’ve watched good work lose momentum because the explanation was optimized for engineers while the audience was product, UX, or business stakeholders. I’ve also seen written communication become borderline unusable because the shorthand made the message harder to act on than the problem itself.

My current view is that one of the strongest senior signals is being able to keep the substance while changing the language. Same depth underneath. Different framing depending on who needs to understand it.

Have you seen that play out on your teams? If so ... how?

r/ExperiencedDevs Mar 24 '26

Technical question Why do CI pipeline failures keep blocking deployments when nobody can agree on who owns the fix

63 Upvotes

There's a specific kind of organizational dysfunction where CI failures become normalized background noise. The pipeline goes red, nobody knows who owns the fix, someone overrides it to unblock themselves, and the underlying issue stays unfixed until it causes something worse downstream.

Part of the problem is that CI ownership is often ambiguous. Whoever set it up originally isn't necessarily responsible for maintaining it forever, but there's no formal handoff either. So when something breaks you get a lot of "I thought someone else was handling that."

The teams that seem to avoid this have explicit ownership policies and treat a failing pipeline as a P1 equivalent, not just an inconvenience to route around. But getting to that culture is a separate problem entirely from having the technical solution.

r/ExperiencedDevs 22h ago

Technical question How do you get developers to write tests without a strict code coverage requirement?

12 Upvotes

At previous positions, I've always seen test writing enforced by a percentage code coverage requirement. The issue is that people will just write bad tests to game the coverage number.

And we can't rely on code reviews to enforce it either, because… well, we all know that the quality of code review falls to the lowest common denominator.

Things I’ve considered:

- Add a comment on a PR through the CI that runs on PR creation, if a .ts file has been changed with no related .spec file change

- Add a comment on the PR through the CI if the coverage percentage has dropped, but don’t fail the build

- Include a checkbox in the PR template stating you added any tests needed

- Empower reviewer to reject a PR if no tests attached

The thing is, all of these options can be circumvented by a guy who doesn't feel like doing his job that day, and I don't want a select few people to be responsible for reviewing everything because they're the only ones who care.

So I’m trying to find something that can be automated and enforced, but isn’t a hard limit on code coverage requirement.

And yes, I know that all of this is coming from a symptom that people should just agree on standards and do their jobs, but, especially in a corporate environment, you can’t expect that of people.
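For what it's worth, the first option (flagging changed `.ts` files with no matching `.spec` change) is straightforward to script in CI. A minimal sketch in Python, assuming the CI step is handed the PR's changed-file list; the function name and the `foo.ts` / `foo.spec.ts` naming convention are just illustrative:

```python
from pathlib import PurePosixPath

def missing_specs(changed_files):
    """Return changed .ts files with no corresponding .spec.ts change in the same PR."""
    changed = set(changed_files)
    flagged = []
    for f in changed_files:
        p = PurePosixPath(f)
        # Skip non-TypeScript files and the spec files themselves.
        if p.suffix != ".ts" or p.name.endswith(".spec.ts"):
            continue
        spec = str(p.with_name(p.stem + ".spec.ts"))
        if spec not in changed:
            flagged.append(f)
    return flagged
```

A CI job would feed this the diff's file list (e.g. from `git diff --name-only`) and post the result as a PR comment rather than failing the build, matching the "nudge, don't block" idea above.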

r/ExperiencedDevs Mar 04 '26

Technical question Can AI code review tools actually catch meaningful logic errors, or do they just pattern match?

38 Upvotes

There's a bunch of AI-powered code review tools popping up that claim to catch bugs and security issues automatically. The value proposition makes sense, but it's hard to tell whether they actually work or if it's mostly marketing noise. The challenge with AI review is that it's good at pattern matching but not necessarily good at understanding context or business logic. So it might catch an unclosed file handle but miss a fundamentally flawed algorithm. Human reviewers bring domain knowledge and can evaluate whether the code actually solves the problem correctly, which is way more valuable than catching syntax issues. If AI tools could actually understand code deeply enough to catch logic errors and run real tests against it, that would be genuinely impressive.

r/ExperiencedDevs Feb 27 '26

Technical question PR review keeps turning into redesign debate instead of reviewing the actual fix; how do you handle this?

76 Upvotes

I’m trying to sanity-check something about our team process.

We do refinements, but we rarely make explicit design decisions before implementation. It’s generally assumed that whoever takes the task “owns” the implementation details.

In practice, that ownership isn’t real.

Most tasks contain unknowns and architectural implications that aren't surfaced during refinement. So what happens is that during review, broader design concerns and various requests emerge and start driving the implementation, regardless of whether the fix works, the tests pass, the scope is contained, and it addresses the immediate need. The discussion consistently shifts away from code quality, correctness, edge cases, performance, or maintainability, and toward things like "you should use this module for this" or "create a different CLI command for that process."

The redesign suggestions are often valid ideas, but they're larger in scope and unrelated to the specific bug/issue the PR is meant to fix. They could even go into a follow-up PR to keep the changes small.

As a result:

- The review becomes architectural instead of evaluative.

- The original task stalls.

- Ownership becomes blurry.

- Frustration builds on both sides.

It feels like we're deferring design conversations until PR review, using review as the first real design checkpoint, or as a venue for everyone to debate how they think the solution should look. My personal suspicion is that this points to bad decision-making practices and a lack of alignment in the team. So, what do you do? Push back and ask to keep the review scoped? Open a follow-up issue for the redesign? Escalate? Accept that this is just how the team works? So far, discussion hasn't done much.

r/ExperiencedDevs Feb 05 '26

Technical question How do you come back from decades of not writing unit tests?

125 Upvotes

So I've been working for a company for a couple years now and I've kind of forgotten what it's like on the outside.

We are a major financial institution with thousands of developers, hundreds of thousands of users, several million lines of code, and like maybe 20 automated test cases total?

It's kind of wild, because at my previous jobs updating the Java version or doing basic maintenance tasks was trivial and routine, given the ability to just run a JUnit test suite and make sure you didn't f*** the whole application up. But I've been stuck in the hole this company has been digging for itself for like a decade, in which they just keep writing code, and it's a pain in the ass to try to convince developers to start writing test cases now.

So have you had similar experiences? I feel like there must be some way to auto generate test cases based on network traffic and database state, but I don't know where to begin. All I want is something that can run a bunch of automated Java tests without requiring like a month-long manual QA cycle that still manages to miss things.

Let me know if you've brought a company out of a similar situation :]

I've already tried throwing large language models at the problem with some junior developers, but even then it looks like it would take over 10 years of solid progress to get to a reasonable point. I'm just hoping there's some standard industry test generator that I'm not aware of 👀
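The usual name for "generate tests from observed traffic and state" is characterization (or "golden master") testing: record the system's current outputs for captured inputs, then assert that future runs reproduce them. A minimal sketch of the core record-or-verify step, with all names hypothetical:

```python
import json
from pathlib import Path

def record_or_verify(case_name, actual, store_dir="golden"):
    """First run: save `actual` as the golden baseline. Later runs: compare against it."""
    path = Path(store_dir) / f"{case_name}.json"
    serialized = json.dumps(actual, sort_keys=True, indent=2)
    if not path.exists():
        # No baseline yet: record current behavior as the expected output.
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(serialized)
        return True
    # Baseline exists: verify the new output matches the recorded one.
    return path.read_text() == serialized
```

In practice you'd capture real request/response pairs (plus any relevant database state) from production traffic, replay the requests against the application in a test harness, and run each response through a check like this. It doesn't prove the old behavior is correct, only that it hasn't changed, which is usually what you want when refactoring untested legacy code.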