
SE Radio 704: Sriram Panyam on System Design Interviews

Sriram Panyam returns to the show to discuss the system design interview (SDI) with host Robert Blumen. This challenging part of the hiring process is included in the interview loop for many jobs across tech, including management and for all levels from entry to senior. The conversation starts with a look at what the SDI is, who will face it, and how critical this interview is for hiring and leveling. Sriram shares some common system design questions and what the interviewers are generally looking for, including stated versus unstated requirements and ambiguity in the questions. He offers recommendations on how candidates should disambiguate their designs and manage their time. He shares some personal stories of interview failures and successes, and even discusses some mistakes that interviewers make. Brought to you by IEEE Computer Society and IEEE Software magazine.

---


Show Notes

#### Related Links

- panyam (@panyam) on X

- Sriram’s SE Radio host page

- Blog

- LinkedIn

#### Related Episodes

SE Radio 636: Sriram Panyam on SaaS Control Planes

---

#### Transcript

Transcript brought to you by IEEE Software magazine.
This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number and URL.

Robert Blumen 00:00:19 For Software Engineering Radio, this is Robert Blumen. Today I’m joined by Sriram Panyam. Sriram is a Technical Fellow at GM in product engineering. Prior to that, he was an engineering manager at Google and LinkedIn. Sriram has also been a guest on SE Radio Episode 636 covering SaaS Control Planes, and he has since become a host, publishing a number of episodes on a range of topics. Sriram, welcome back to Software Engineering Radio, though you haven’t even been gone.

Sriram Panyam 00:00:53 Robert, thanks for having me here. One thing I can say is I can never stay away for long, but yeah, I look forward to having some fun.

Robert Blumen 00:00:59 Today we’re going to be talking about the System Design Interview. To start, what is the system design interview, and which candidates are likely to have this interview?

Sriram Panyam 00:01:11 So a system design interview, it’s a pretty common module in your typical interview loop for technical candidates, right? Specifically in software engineering. And the idea of a system design interview is to really assess and measure a candidate’s experience and ability to break down complex problems at high scale and design around them, taking into account various tradeoffs, right? And really, the more senior you get, the more important a part of the hiring process it is. It can involve software systems and architectures, and it’s used for candidates of various types: obviously software engineers, but also engineering managers, architects, DevOps. The focus might change, but the overall themes are pretty consistent.

Robert Blumen 00:01:52 Let me break down a few points there. You said pretty much everyone’s going to see this interview. Do junior software engineers see it, or do you have to reach a certain level of seniority before you would have it?

Sriram Panyam 00:02:06 That’s a really good question. I want to say no and yes. So typically, with junior engineers, at least until recently, the focus has been on coding, right? The premise being that out of college, out of university, you haven’t been exposed to designing large-scale systems, right? So, a lot of that would’ve been anecdotal or reading from a book, whereas coding is a pretty big part of your college curriculum. Now, as you see people upleveling in general, even outside work, these rounds are used as a measure of seniority and talent and skill, sometimes even at an earlier stage. So yes, junior engineers may not see it, and it won’t be a make-or-break for them, but unlike before, we have started to see it seep in these days. And we’ll talk about why and when that is.

Robert Blumen 00:02:50 So would people in adjacent roles, data scientists, SREs, managers, see this interview?

Sriram Panyam 00:02:57 So I can’t speak much to data scientists. A lot of data science, at least from my understanding, has been focused around the statistical and mathematical aspects of design, right? You might find data scientists looking at, well, how do we build models, right? Or how do we evaluate models? How do we build systems to do these things? The ML engineers and the ML infra folks are much, much more likely to see these in practice, because they have to take a model or a system that is given to them, or that is designed for them, and actually go and build it for production, which is where you’ll start seeing scale being a big factor. How do you scale an ML model across a million users when each ML model might take many, many gigabytes of high-bandwidth memory to serve? Now, you asked about managers. The conventional wisdom says that they wouldn’t be seeing it, because managers are typically seen as people leaders, managing people, budgets, process, and so on.

Sriram Panyam 00:03:52 Now, what you’ll find in a lot of technical organizations is that managers have to actually be thorough at understanding and evaluating their engineers, both from a cultural as well as a technical perspective. They also have to be able to liaise between product and the launches being developed by their teams, which means how scalable the systems are, how thorough they are, how maintainable they are, how productionized they are; these are all performance indicators that will be tied back to a manager for their own team’s performance. So, a manager who is not aware of what good systems look like will struggle. So, what a lot of companies are doing is evaluating managers on their ability to handle system design interviews fairly well, almost at a strong IC level. So, if you’re a technical manager, definitely expect this and definitely be prepared for this. And I would say rightly so.

Robert Blumen 00:04:44 We’ve been talking around this point, would you like to say anything more about what is the employer looking for in this interview?

Sriram Panyam 00:04:52 Yes. Well, the TLDR is experience, and having built or handled things at that scale before. But rewinding a bit, they also test for other things: coding, people management if you’re a manager, behavioral skills to see how you are as a team player. The interesting thing about system design interviews is that, somewhat like behavioral interviews, there is no one right answer. There is no one exact way of doing something, no one exact way of building a system. In fact, when you’re thrown a question like design system X, design Uber, or design Stripe, they want to see if you as a candidate understand what that system is actually meant to do. Because Stripe has what, a thousand engineers working there today? I might be off by a couple of zeros, right?

Sriram Panyam 00:05:38 They have a thousand engineers, and Stripe has been built over the last 15-something years. So, they don’t expect you to build all of Stripe; they want you to drive what part of Stripe needs to be built. They want to see that you as an engineer can craft requirements, can understand them, can clarify them, can push back if needed, and guide the interview in a certain way. Now yes, you won’t build the entire system, but can you get toward building those things in a 45-minute or one-hour slot? Again, seniority comes into play here. They want to see if you are a junior engineer who, when told go, will start building right away, or if you are a senior or staff or senior staff engineer who will actually take time to clarify not just today’s needs but what the system should be doing tomorrow, in six months’ time, in one year’s time, and how things are going to change. So, they want to see maturity, they want to see experience, they want to see obviously technical abilities, and they want to see your time and expectation management skills too, which sounds surprising, doesn’t it?

Robert Blumen 00:06:30 Well, the more I do job interviews in my career, the more I realize the true question they’re asking is the question they don’t ask. And a big part of being a candidate is figuring out, what are they really asking me here? What are they really looking for?

Sriram Panyam 00:06:46 I know, and it often doesn’t come across clearly. I’ve seen interviews where literally the only instruction you get is go and design X, and you’ll see all kinds of personalities on the other side of the interview table. And it’s up to the candidate to see how they want to take it, how they want to deal with that new challenge, the unseen, the surprises, and how they handle it. Some interviewers are very friendly; they’ll give you all the guidance, all the hints, all the requirements and clarification. Some interviewers will just give you “design X,” and that’s it. And this is where composure helps, time management helps, calmness helps.

Robert Blumen 00:07:22 I do want to go more into some of those issues, but let’s come back to that. Still in the preliminaries, I want to go into the criticality of this interview for the hiring decision. Let me set this question up. As an infrastructure engineer, I’m given a coding interview. I don’t think I can get the job just because I did a great job in coding, but I think I could not get the job if I can’t code. The hiring decision is going to come from other interviews. You can have a matrix here, or pick a couple of examples. What is the criticality of this interview for different roles?

Sriram Panyam 00:08:02 Right. So, it depends on the role, the seniority, and also the company that’s interviewing you. Typically, your interview loop is what, five to seven rounds depending on the company and, I guess, the market, of which typically two are design, two coding, one behavioral. These days you’ll even see take-home tests and a few other variations. But focusing on what it typically was before, the two coding and two design, or three coding and one design, plus behavioral, changes based on seniority and so on. Now, you asked about the importance of coding, or what happens if you don’t do coding well for an infra role. I do want to clarify: infra roles can mean different things. Traditionally, infra 20 years ago meant network admin or DB admin work, or admin of some kind of setup. Today it can still mean that, but it can also mean DevOps, a team dedicated to setting up your platform-as-a-service layer for you.

Sriram Panyam 00:08:54 Or it can mean a role where you are building the actual database engine for a popular database, or you’re building Kafka itself; that becomes your infra. So, depending on the kind of infra role, coding may be important. For example, if you are designing, let’s say, a B+ tree index, or you’re part of the team that’s designing a B+ tree index for a database engine, that is heavy computer science fundamentals, so they may index more on coding. But if you’re looking at DevOps, your role is to set up infrastructure, and you need to push back on engineers who might say, well, I want to have five different indexes on this table, which means my latency will have these kinds of boundaries. So how do I set up my vCPUs and RAM and network and so on?

Sriram Panyam 00:09:33 So that’s where the design maturity comes in. Being able to say, if my system demands are these: on day one I’m going to have 50 million users, on day two I’m going to have a hundred million users, it’s going to grow this way, and each user is going to spend so much time on my site doing these five things, then each of those things, and its impact, can be measured. So, can you do those kinds of calculations? Can you analyze the system to come up with, hey, we should design it this way? At that point your coding may not be as important as your ability to reason about your system’s behavior and performance. So when and where coding is important depends on what kind of role it is, who, for lack of a better word, the customer of your skills is, and what seniority you’re going in for. If you are going to be a TL who leads and guides other engineers, your raw coding will not be as important in that scenario.
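The reasoning Sriram describes, going from user counts to system demands, is back-of-the-envelope math. A minimal sketch, where all the constants (five actions per user per day, a 3x peak factor, 1,000 req/s per server) are illustrative assumptions rather than figures from the episode:

```python
# Back-of-the-envelope sizing: turn a usage estimate into a request rate.
SECONDS_PER_DAY = 86_400

def requests_per_second(daily_active_users: int, actions_per_user_per_day: int) -> float:
    """Average request rate implied by the usage estimate."""
    return daily_active_users * actions_per_user_per_day / SECONDS_PER_DAY

avg_rps = requests_per_second(50_000_000, 5)   # the day-one figure from the transcript
peak_rps = avg_rps * 3                         # rule of thumb: peak is roughly 3x average
servers = peak_rps / 1_000                     # assume ~1,000 req/s per server
print(f"~{avg_rps:,.0f} req/s average, ~{peak_rps:,.0f} at peak, ~{servers:.0f} servers")
```

What interviewers generally reward here is that the arithmetic and its assumptions are explicit, not that the constants are exact.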

Robert Blumen 00:10:23 Now it would be a good time to ask, can you give us an example or two of the system design questions?

Sriram Panyam 00:10:30 So by the way, there are plenty of sites on the web that have detailed write-ups of various questions and solutions. Look at coding, right? There are literally thousands of coding questions that you can be asked. If you go to LeetCode, I think 10 years ago LeetCode had a hundred questions on the site; today they have over 4,000 questions, it’s growing, and nobody can physically do all of them. Ironically, for system design there’s about a handful, something like 25 systems that you’ll be asked to design, categorized as easy, medium, hard. The easiest one, which any system design interviewee will be aware of, is designing Bitly. Bitly is a very popular URL-shortening system. If you provide a large URL, and I can’t think of a large URL off the top of my head, but a really large URL, it gives you a shortened URL like bit.ly/A124.

Sriram Panyam 00:11:18 These are memorable, they can be easily shared, and so on. This is one of the simplest system design interview questions you’ll see, if ever, and it has a lot of the elements of thinking about scale. It’s simple enough that you can reason about what the functional behavior should be: you know what a right system here looks like. It should take a big URL and turn it into a small URL, and take a small URL and give you a redirect to the original large URL. But as you scale, as you think of different features like analytics, and as you think of reliability guarantees, latency issues, and all that, the system takes on a life of its own. That’s one example. Then you have things that span other facets, like, hey, if I do a write on a system, as in create something or update a piece of data, it needs to propagate somewhere else.

Sriram Panyam 00:12:00 There’s a whole class of design problems that look at this. A really good example is Twitter or X. I create a post, and if you are following me, it needs to show up on your timeline. Now imagine I’m followed by a thousand followers; how quickly can it get to all of my thousand followers’ timelines? I wish, but if I were Taylor Swift and I had a million followers, what then? Similarly, you can think of a very write-heavy system. Take, for example, Uber’s driver matching or driver location tracking feature. I open the Uber app, I see a bunch of drivers on the map, and they need to move on the screen in real time. This is a very, very write-heavy system, but I’m okay with a fair bit of inaccuracy. It doesn’t really matter to me if the driver is 10 yards this way or that way.

Sriram Panyam 00:12:50 So this one’s a bit different. You can be asked about designing Netflix: how would something like Netflix serve three-hour or two-hour movies at scale to hundreds of millions of users worldwide? Now you’re talking about caches and CDNs and things that have to scale and live on the edge. So, these are some examples that can be drilled into. Again, there’s only a handful of these interview questions, but each question can span multiple ways. To go to the Uber example: when the interviewer says, oh, design Uber for me, a junior or less experienced candidate might start throwing boxes on the whiteboard, talking about databases and caches and mobile phones and a whole bunch of things, without asking, well, do you want me to talk about driver tracking, or do you want to talk about driver selection? Just that clarification can put you on a completely different path, a different set of requirements. And here’s where one of the things comes in, right? When we think of a system like Uber, the tendency is to think of Uber as one system. There is Uber, but Uber again has a thousand-plus engineers who are building dozens and hundreds of microservices. So you might really only be looking at building one of those microservices or services in your interview. So, zooming out and clarifying really helps.
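For flavor, the heart of the Bitly-style question often comes down to generating short codes. One common approach, though by no means the only one and not necessarily Bitly’s actual implementation, is base-62 encoding of a sequential numeric ID:

```python
import string

# 0-9, a-z, A-Z: 62 symbols, so 62^7 (about 3.5 trillion) codes fit in seven characters.
ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase

def encode_base62(n: int) -> str:
    """Map a sequential numeric ID to a short, shareable code."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, r = divmod(n, 62)
        out.append(ALPHABET[r])
    return "".join(reversed(out))

print(encode_base62(123_456_789))  # a short code standing in for the long URL
```

The read path is then just a key-value lookup from code back to the stored URL, which is exactly where the scaling conversation (caching, replication, analytics) begins.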

Robert Blumen 00:14:06 All those questions illustrate the enormous range of dimensions and characteristics you could address, and the need to narrow down what the interviewer’s looking for. So, if you ask them, which part of Uber do you want me to design, what are you going to get back?

Sriram Panyam 00:14:27 That’s a great question, and here’s where luck plays a role, but I would say it is good to be prepared for it. If I were to ask you, as the interviewer, hey Robert, what should I design? You can be generous and tell me, well, design the driver matching feature for me. If you are generous, you might even tell me: you have a system where you open the phone, you’re going to see a map, the map is going to show all these little dots, each dot represents drivers in your vicinity or some range, and I want to track that. On the other extreme, you might have an interviewer who might say, look, we are Uber, we are in year one of launch, and we only have so many users. Our goal is to build engagement. What kind of features do you think we might need?

Sriram Panyam 00:15:06 It sounds like a very product management question, and it kind of is, but if you get this, you’ve got to roll with it. So, it’s a good thing that you only have 25 or so systems to prepare for; that gives you time to understand the market, the strategy, what each system is about, and to get a better grasp of what you might want to build, so that you can actually offer options on what to build. It shows two things. One, it shows your seniority, that you worked on a system that went from one to a million to 10 million users. It also shows that at some point you can go, well, I spoke about driver matching, and now I know that when I have to book an order, I have to make sure that in a heavily congested area these drivers don’t end up getting double-booked. So, being able to form those connections across different parts of a product. As a principal engineer you’ll see that; you might not see it as a new engineer. So, if you ever get to ask a question, go prepared. But sometimes there’s only so much you can do.

Robert Blumen 00:16:04 Sometimes you don’t know what you’re going to get. Let’s change tracks here. There’s a question that you proposed that sounds very interesting to me, and I don’t know what I’m going to get. What is the history of this system design interview track, and how has it evolved over the past decade?

Sriram Panyam 00:16:22 I think the evolution has been over the last two decades, and this is only my observation; I don’t claim the full history. What I’ve observed is that in the early 2000s, just before or around the dot-com bubble and burst, a lot of the design work, or some design work, was largely enterprise driven. I started my career in telecommunications, and the closest to design that I had done at the time, 25 years ago, was building data centers. Now, before you think that’s cool, data centers back then were literally one or two racks at a time in a giant building that was a data center, and a lot of the expertise needed back then was your Cisco-certified X, Y, Z, your Microsoft-certified A, B, C, and your vendor-certified so-and-so. And you’ll find that a lot of these interviews were focused around certifications and enterprise-specific or vendor-specific things.

Sriram Panyam 00:17:20 Now, as companies like Google and Amazon and the early-era companies took off, they started building a lot of things in-house. In fact, when I spoke to somebody at Cisco 20 years ago, they claimed that Google was their biggest threat. And I was surprised, because Google was a search company and Cisco was a networking giant; how could they be in the same realm? But unbeknownst to me at the time, Google was building their own data centers, their own custom networking switches, their own topologies, protocols, whatnot. So, as these unicorns were becoming giants, they started seeing that folks who came with pure vendor skills could not easily transfer those skills to a different domain or a different way of doing things. So, they started going more and more fundamental in how they would interview candidates. Google even used to have those counting-the-number-of-balls-in-a-bus kind of questions, which have long been outlawed.

Sriram Panyam 00:18:11 But what it gave way to was really thinking about systems, thinking about quantitative methods, thinking about reasoning, the causal reasoning of, if the load goes up here, what happens over there. And from that they found that regardless of your background, if you can think in a systematic way and think of basic constructs like queues in a system, things like arrival rates, things like service rates, things like, if my node can process so many requests, what happens over there, then these candidates would get better at thinking at incrementally higher levels of granularity. So, design rounds became more and more about the concepts of building ambiguous systems that may not even be real but could stress your thinking in a very vendor-agnostic way. I can talk about when I failed my first design interview at Google in 2008, when I saw one of these questions for the first time. For the last 10, 15 years this has been a staple of system design interviews. The depth, scope, and expectations have gotten higher in the last few years, but the structure has been fairly consistent, I would say, from 2010 onwards.
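The arrival-rate and service-rate constructs Sriram mentions come straight from queueing theory. A toy sketch using the standard M/M/1 formulas, with rates invented purely for illustration, shows the causal reasoning interviewers probe: load goes up here, wait times explode over there:

```python
def utilization(arrival_rate: float, service_rate: float, servers: int = 1) -> float:
    """rho = lambda / (s * mu); once this approaches 1.0 the queue grows without bound."""
    return arrival_rate / (servers * service_rate)

def mm1_time_in_system(arrival_rate: float, service_rate: float) -> float:
    """Mean time in an M/M/1 system, W = 1 / (mu - lambda); valid only while rho < 1."""
    assert arrival_rate < service_rate, "unstable: arrivals outpace service"
    return 1.0 / (service_rate - arrival_rate)

# A node that can serve 1,000 req/s and receives 900 req/s is at 90% utilization,
# and the average time in the system is already 10x the bare 1 ms service time.
print(utilization(900, 1000))          # 0.9
print(mm1_time_in_system(900, 1000))   # 0.01 seconds
```

The takeaway candidates are expected to internalize is the nonlinearity: going from 50% to 90% utilization does not double latency, it multiplies it.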

Robert Blumen 00:19:18 You said just now when you failed your first system design interview at Google, I’d love to hear that story.

Sriram Panyam 00:19:24 There’s nothing quite like exposing yourself on live radio, is there? Well, my first design interview that I failed was in 2008. Sadly, it wasn’t my last. I would continue to fail them for, I think, another five years or so, sadly, without even knowing why I was failing. The way I would fail them, and later I would see this was a pretty common thing, I wasn’t unique there, was that I would go into a system interview, somebody would ask me to design X, and I would suddenly throw things like caches and components and technologies on the board without understanding what the problem was about, what the requirements were, what the behavioral or performance expectations of the system were. Actually, the first one I was ever asked was: you have to push a hundred terabytes of data into a data center.

Sriram Panyam 00:20:08 How would you do this? Looking back now, you’d break it down and do it in a series of steps, but I just came in with, well, I would use FTP. That was my vague response, right? And that opened up a whole rabbit hole of, well, what about this? What about that? It became pretty much a cat-and-mouse game where I was playing catch-up. And I would do this for many years without knowing why I would fail these interviews. I mean, I would come out of them thinking, hey, I had all the right buzzwords, I had FTP in there, I had pipes in there, I had switches in there, I had everything in there, but I still failed. Around 2016 I was at an Uber interview, which forever changed my life. The question was designing Twitter, and same thing, I was throwing around things like queues and load balancers and nodes and DynamoDBs and whatnot.

Sriram Panyam 00:20:54 I was struggling, and the interviewer, I think, had a pained look on his face. So, in a very rare act of generosity, he asked one question, and the question was, what would be the first request that would fail? And that was toward the end of the interview. And I packed up and went home. Weirdly enough, when I went home, I ran across this book here, I’m pointing to a book called Database Management Systems by Raghu Ramakrishnan and Johannes Gehrke. It’s a book on databases, more specifically a book on database internals. And what it talks about is, how do indexes work in a database? What kind of latencies do they give you? What kind of response times do they give you? So that question made me look at this book and realize, well, what if I had asked, how many requests will this system be getting?

Sriram Panyam 00:21:41 If you’ve got to dump it all into a database, when would the first failure be? The first failure would be when the database cannot handle the load. When can it not handle the load? Then work backwards. Suddenly, I don’t know if you played that game, The Incredible Machine, back in the ’90s? You’d build these Rube Goldberg machines where you light a candle, the candle blows up a balloon, the balloon floats up and turns a switch on, and the switch turns on a fan, which hits a cat, and so on through the machine, right? And you kind of think of that and work backwards, and each of those components becomes part of that larger system. So, this interview made me realize, okay, I can break my problem down into these steps, and I’ll talk about those steps too, right? Like, ask for the requirements, look at exactly what the SLOs are.

Sriram Panyam 00:22:20 If I need to put something on Twitter as a user, how much delay am I allowed to tolerate? As a feed consumer, am I okay if a post from you takes more than 10 seconds, more than five seconds, more than 20 seconds to show up? Where would I be happy? So, putting these numbers in place can help me learn, okay, what if I design a system that matches these different criteria? So that was the aha moment, I guess, in how I started learning about it. I would still fail, but at least now I knew why I was failing.
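The "what's the first request that would fail" exercise can be reduced to a crude capacity check: work backwards from the database's per-write latency to the write rate it can sustain. Every number below (connection count, latency) is a made-up example of the method, not a real benchmark:

```python
def db_write_capacity(write_latency_ms: float, connections: int) -> float:
    """Writes/sec a database can sustain if each connection issues writes serially."""
    writes_per_connection = 1_000.0 / write_latency_ms
    return connections * writes_per_connection

def writes_fail_first(offered_write_rps: float, write_latency_ms: float,
                      connections: int) -> bool:
    """True when the offered write load exceeds what the database can absorb."""
    return offered_write_rps > db_write_capacity(write_latency_ms, connections)

# 20 connections at 5 ms per write absorb 4,000 writes/s, so an offered load of
# 6,000 writes/s means writes are the first requests to fail; that is where you
# start the sharding, batching, or queueing discussion.
print(db_write_capacity(5, 20))          # 4000.0
print(writes_fail_first(6_000, 5, 20))   # True
```

Working backwards from that first failing component, as Sriram describes, is what turns a pile of boxes on the whiteboard into a design argument.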

Robert Blumen 00:22:49 Yeah. So, you’ve talked about failures, learning from failure, and how you’ve evolved your approach. Are there any key takeaways or lessons learned from that process that you haven’t covered?

Sriram Panyam 00:23:02 I know that Uber failure and recovery seems like a Rocky-style montage over a weekend, but the reality was that the aha moment was pure luck. I happened to have that book at home and found it. What I did thankfully do was reach out to a few of my friends and pitch to them my thoughts on why things had gone wrong. At that time my network was pretty small, so I had to figure a lot out on my own. But those colleagues were very, very helpful; they started challenging a few of my assumptions about what I thought these design interviews were like. So, as I was doing more mocks and getting their sense of how I was doing, going from complete confusion on their faces to a growing clarity as I improved helped me formulate my own way.

Sriram Panyam 00:23:40 And again, it’s not a patented approach or anything. What helped me at the time was really breaking the problem down into, A, gathering all requirements, you know, functional requirements, non-functional requirements, and even extensions that you might want to consider. And then really going down to talk about, if you were to solve those requirements, what would the APIs look like? What would be the entities and objects in your system that you would store for serving those requirements? And then get to a point where you can draw on the board a very, very high-level diagram of a system that is purely functional and not yet scalable. If that can take care of all your functional requirements in the first five or 10 minutes, you are a long way in, because that sets the stage for, A, showing that you thought about the day-one system and, B, cutting down any ambiguity the interviewer might want to trip you up with.

Sriram Panyam 00:24:26 And now is when you look at the non-functional requirements, things like reads should take no more than 10 milliseconds, writes should take no more than two milliseconds, freshness SLOs, and so on. Now, does the system that you have in place, with one database and one server, for example, scale? Obviously it can’t, so how do we do this? Where can you scale? Where can you break the bottlenecks and systematically evolve, growing each branch of that design tree one piece at a time? So, I think this structured way of thinking helped me break the problem down, and for me, back-of-the-envelope math was something that I was usually good at. So that helped me turn the problem into a space that I was strong in. If I could take a problem and put numbers on the various expectations of the system, then I could examine each one of those to see what solves it and what trade-offs I’ve got to make.

Robert Blumen 00:25:13 How do you go about communicating the tradeoffs effectively?

Sriram Panyam 00:25:17 So, given my strengths are actually quantitative, showing that if I know a database, for example, can do a read in a millisecond and a write in five milliseconds, I can use that to show that if I use this technology with these guarantees, I would meet these SLOs. But the cost of doing that might be literally high cost. For instance, you might say that putting everything in memory, or in a very, very highly performant database, can get you to your SLOs, but if your customers don’t care, or if your users don’t care and they’re very cost sensitive, you might be okay with a lower SLO in parts of the system. So, showing these tradeoffs with numbers is, I think, the easiest way to do it. It’s also the most intuitive way; the numbers speak for themselves. You can show how one part connects to something else and what the rationale for a tradeoff might be.
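Putting the tradeoff in numbers, as Sriram suggests, can be as simple as a two-line cost comparison. The prices and dataset size below are placeholders invented to make the shape of the argument visible, not real cloud pricing:

```python
def monthly_storage_cost(dataset_gb: float, price_per_gb_month: float) -> float:
    """Recurring cost of holding the dataset in a given storage tier."""
    return dataset_gb * price_per_gb_month

RAM_PRICE, SSD_PRICE = 3.00, 0.10   # $/GB-month, illustrative placeholders
DATASET_GB = 2_000

in_memory = monthly_storage_cost(DATASET_GB, RAM_PRICE)  # sub-millisecond reads
on_ssd = monthly_storage_cost(DATASET_GB, SSD_PRICE)     # single-digit-ms reads
print(f"in-memory ${in_memory:,.0f}/mo vs SSD ${on_ssd:,.0f}/mo, "
      f"{in_memory / on_ssd:.0f}x more for the tighter SLO")
```

Stating the tradeoff this way ("a 30x cost multiple buys roughly a 10x latency improvement; do the users care?") is exactly the numbers-speak-for-themselves framing described above.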

Robert Blumen 00:26:02 Now I want to switch tracks considerably. We’ve been talking about the structure of the interviews and what the candidate faces. I want to look at it now from the interviewer’s perspective. Within a company, is the interviewer given instructions like, go do a system design interview, or do companies pretty much standardize so everyone gets as close as possible to the same interview?

Sriram Panyam 00:26:26 The answer is yes to both. Typically, when a company lacks a strong interviewer bench, they might be putting all their eggs in one basket with one or two very senior engineers in the company. If you think of a startup, that person is usually the CTO, and that CTO is most likely going to use the set of problems that they have, or might have in the next three to six months, as a basis for posing problems to the candidate. Which is not a bad thing, because you’re testing for practical application. As companies grow, there is a bigger emphasis on being fair, not being biased, and ensuring that all candidates have a fairly similar interview bar and experience, without it being a thing that you can look up on Reddit and learn the answer for. So, companies actually invest in interviewer training so that they can remove some of the biased language, be more focused, and be more targeted toward the candidate’s experience.

Sriram Panyam 00:27:17 You don't want a front-end engineer asking a heavily front-end-focused question of a candidate who's probably a backend engineer or a platform engineer or an infra engineer, and vice versa. So, companies invest in both training and domain-based interviewing skills and interview coaching, right? They also do this for leveling. They want to make sure that everybody at a certain level meets a similar bar and is being assessed in a similar way, so they can remove some of the bias around how the interview is happening. You also have panels that look at how an interview went, along with the transcript. By the way, interviewers typically write down these interviews and present back to a committee or panel, which reviews both the candidate's performance and whether there are any anomalies in how the interviewer might have leaned too far into one side. I've seen interviewers at large companies come in with a very, very focused domain-specific question that could throw off a candidate from a completely different domain, because the candidate spends time trying to understand the domain instead of focusing on design. And I've seen companies give feedback to the interviewer and bring the candidate back in for a second round because the first round was way off course. It can happen both ways. I mean, they're both good in some sense, and they both have their downsides.

Robert Blumen 00:28:26 Do interviewers adaptively scale the level of difficulty based on how the candidate's doing? If they solve the easy problem, do they then throw some new requirements in, or increase the scale by 10, something to constantly challenge the candidate?

Sriram Panyam 00:28:44 Again, it depends on the culture. Before I talk about that, let me talk about leveling first. The difficulty of the questions they start off with also depends on the target level of the candidate. A candidate who's being interviewed for L4 or mid-level engineering roles will be assessed more for breadth. Do they know about the different technologies, for example? Do they know about API gateways, do they know about a cache? Are they using it, and so on. If that L4 or mid-level candidate is acing that, the interviewer might start going deeper, right? Well, you mentioned you can use a load balancer. Okay, tell me about how that could perform in this scenario. Now, as candidates get more senior, it becomes deeper. They're starting to look for senior candidates to have more advanced design skills.

Sriram Panyam 00:29:27 Knowing when you use what kind of storage systems, reasoning about costs and benefits, SLOs and so on, and really being able to articulate their own design choices. Here's where you'll find candidates shift from "well, I read that in a book" to "I've seen that being used, so I can empathize with it." And then as they get more senior, like staff and senior staff level, you really want people who have built this stuff from the ground up, owned their technical choices, lived through them, fixed issues with them, scaled them, and so on. So it's not so much that an interviewer will make it challenging for the sake of it; often it's to see how far the candidate can go, so they can probe their depth and their breadth and really look at other aspects beyond just "is the candidate answering my questions" versus "is the candidate owning and driving the interview process and really treating me as a peer." And obviously, at any point in time, both the interviewer and/or the candidate may reach their limit, but the interviewer usually has the advantage of having gone through this before and having prepared the question. The interviewer knows the question at hand, so they usually have a leg up, because they know the various paths in that tree.

Robert Blumen 00:30:31 Taking into account leveling, fairness, uniformity, and the different levels of people you get, what is a reasonable approach to giving candidates hints?

Sriram Panyam 00:30:43 This is also very non-standard. I've seen companies, and interviewers within companies, penalize candidates for even a single hint, and I've seen companies and interviewers who provide the context around how and why the hint was given, and then judge how well the candidate takes the hint, takes the feedback, and course corrects. I was in an interview recently where the question was, I believe, my favorite about Uber as well. And there was a point where the candidate was stuck. I don't want to give too much detail, because if that person is listening to this, I don't want to give it away. The candidate was doing really well, by the way. So, towards the end, I wanted to push in one area and asked a question around: can you make some tradeoff that is unconventional in how you would make this better?

Sriram Panyam 00:31:27 And the candidate paused for a few seconds, and he wasn't quite sure where to go. And then I gave a simple hint: hey, what if you were to think of that, right? And suddenly the candidate just exploded, in a good way. He just had that epiphany, and he was able to explain it and roll with it. Is that a bad candidate? No, because that candidate clearly knew about this whole area. He just hadn't made that connection under interview pressure. So, I personally will look for things like that and give the candidate the benefit of the doubt, but there is no standard forgiveness procedure there, so be ready for anything.

Robert Blumen 00:32:00 Earlier you talked about learning from failures as a candidate. What about mistakes that interviewers make either that you’ve made or that you’ve seen?

Sriram Panyam 00:32:12 So it's an interesting question, and sadly it doesn't have a non-controversial answer. What I mean is, a lot of times interviewers do not know that they are not providing a fair interview experience. There are biases, and it goes both ways. Interviewers might feel that, hey look, I know this area, I'll make sure it's so stripped down that it becomes fair, but that might not be the case. And also, given the market and so on, it really depends on what kind of safety and inclusiveness measures the company and the interviewer are operating under for that feedback to, A, be discovered by the committee or somebody, and then, B, get back to the interviewer. And then there is no real set process for how that retraining and re-coaching happens so that it doesn't make its way in again. The closest I've seen is that companies might blacklist certain questions, or they might change some of the language that interviewers are supposed to use in the feedback process, but there's no clear standard around "this is how you should do this, and this is how you should not."

Sriram Panyam 00:33:17 It happens, but a lot of the time there's no hard and fast rule to mitigate it. Where possible, companies pair up an interviewer with a shadow, so the shadow can also record notes and offer feedback if necessary. Sometimes the leveling difference between the shadow and the interviewer is quite high, so you might not see the feedback getting there. One way I would mitigate this is by having shadows who are at the same level and who don't have a conflicting reporting structure, so the feedback can flow and be taken into account. But really, it is a fairly cloudy process, I guess.

Robert Blumen 00:33:53 Is there any story that you could share where you were the interviewer, you asked a question and either at the time or later you think I could have done that better?

Sriram Panyam 00:34:03 So, in my early days of interviewing, and this applies to both coding and design interviews, right? This is like 15-something years ago. I would go in with the mentality of: I'm the interviewer, I need to show who the boss is. And again, that's really, really embarrassing at best and something to be ashamed of at worst. Your role as an interviewer is not a superiority contest, right? It is to attract the best candidates for your company, for your team, for your business. Somebody you want to work with, somebody who will lift your game, which is almost a way of saying they need to be better than you. You need to be testing them for that, but also be humble enough to recognize and accept it. And many interviewers early in their interviewing career go in with that: look, I can't look bad in this setting.

Sriram Panyam 00:34:53 And a lot of great companies kind of help you with that coaching, help you with pairing up you with the right level of people so that like your growth as interviewer is also organic and over time. So, this room for this ego clash doesn’t come in. Now if I was to go back to those initial set of interviewers that I had been very mean and harsh on, I would definitely slap myself in the back of the head and coach myself. But I’m hoping that this question and the podcast does a better job.

Robert Blumen 00:35:19 Yeah, well, that's a great lesson. Now switching back to the candidate, or this could be from either side: you've talked a lot about how there's an intentional ambiguity, and interviewers are looking for the candidate to add some structure that's not there, as far as what they talk about and how they talk about it. Do you have any guidelines as far as time management, primarily on the candidate side, but it could be on either side, when you've got 40 minutes?

Sriram Panyam 00:35:50 Well, let me talk about that from both sides. From the candidate side: first of all, the sections that I as a candidate want to cover are the requirements, the API and entities, the high-level design, then the scalability barriers, and then extensions towards the end, or some variation of that. And when I practice for interviews, and I still have to, because it's something that if you don't do, you lose, I typically try to cover both the requirements and the APIs and entities in the first five to 10 minutes, max. And then the high-level design, where things just work without any scale concerns, in about five minutes. And that includes just drawing something on the board, right? Excalidraw or Miro or whatever online tool we use. And then that leaves you, what, in a 45-minute interview, if my math is correct, about 20 minutes for the rest, which is not bad, because you can actually talk about, and you find this in all these systems and interviews, where the scalability barriers are going to be.
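The time budget Sriram describes is easy to sanity-check with arithmetic. Here is a minimal sketch of one possible split of a 45-minute slot; the exact minutes per phase are illustrative assumptions, not figures prescribed in the episode:

```python
# Rough phase budget for a 45-minute system design interview,
# following the phase ordering described above. Minutes are illustrative.
budget_minutes = {
    "requirements + APIs/entities": 10,
    "high-level design (no scale concerns)": 5,
    "scalability barriers / deep dive": 20,
    "extensions and wrap-up": 10,
}

total = sum(budget_minutes.values())
assert total == 45  # the phases must fit the slot

for phase, minutes in budget_minutes.items():
    print(f"{minutes:>3} min  {phase}")
```

The point of writing it down is the constraint, not the exact numbers: if requirements run to 20 minutes, something else has to shrink, and it is usually the deep dive where the signal is.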

Sriram Panyam 00:36:43 So you prepare beforehand: okay, the first thing they're going to ask me is, scale goes up, the number of users goes up by 10X, a hundred X, a thousand X, what do we do? So, this becomes a very structured learning process, and I would say focus on that. But now, on the other side, you might be thinking, oh, this is easy as an interviewer: if I see a candidate like this, perfect, right? Now, in all this time, what you want to do is drive the conversation so that you aren't just plowing forward but giving the interviewer enough room: you know, I'm going to go with this API because of these assumptions. I could do that if this were the case; which way do you want to go? If you don't say anything, I'll go with this. I just want to make sure that you're okay with it.

Sriram Panyam 00:37:17 So here’s where you’re kind of having that explicit silent contract with the interviewer that your path is kind of, it’s a dual commitment. And then as you do this throughout the process, that gives less opportunity for kind of trick questions. As in any trick questions or backing up has to be explicit and an exception rather than the rule. Now on the interviewer side, you want to kind of given enough time to follow a candidate on their pace. And I usually have the candidate walk through a certain line of reasoning to see where they’re going with it, right? See if I can tie them back to the scalability barriers later on. I usually avoid interrupting unless I see that interview’s completely going off track and, it’s completely off topic. Or add a path where there’s no way they can come back to the scale concerns and scalability concerns later on. So, depending on the interviewer, I mean on your personality you might be somebody who has control and be upfront about, look I want to hear about requirements, I want to hear about how level design, I want to hear about all these. Can you go through it? You can do that. Or if you’re an interviewer who’s a bit freer flowing, you can choose, you can let the candidate drive it but then pull the reins in if you think it’s not going to go.

Robert Blumen 00:38:22 This process you're talking about, where you're getting into a dialogue, a negotiation with the interviewer, and I'm saying, if I'm the candidate: okay, I've got this, and I could go this way or that way; do you want me to talk about the APIs or security? You're answering one of those questions that they didn't ask but that is really important, which is: are you a reasonable person to work with, and can you negotiate in the workplace?

Sriram Panyam 00:38:47 Yes, you are right. They are looking for that, but sometimes they're not, or sometimes they don't know that they should be looking for it, right? I've been on the receiving end of an interview where the question was literally: design a system. Design Uber. If I have concerns, I will raise my eyebrow, right? And so as a candidate, you're pretty much looking for those eyebrow raises that may be there, or that you might have missed. So, you do find both sides, and some interviewers may not even value it. So, you've got to read the room, you've got to see what the interviewer is like, and you get that from experience. And even now, I sometimes tend to misread it. So, it is just a matter of practice.

Robert Blumen 00:39:20 The way I interpret that example is that the interviewing process is a two-way street, and the company is being interviewed as well. If I go into a workplace where their mode of communication is raised eyebrows, and I want more direct communication, then I probably don't want to work there.

Sriram Panyam 00:39:38 So it's not that that's the company's mode of communication; it's probably that particular interviewer, right? I mean, I've been on many panels where you would find a whole range of interviewer personalities, or a single interviewer may have different moods depending on the day, right? So, it's hard to know what mood that person is in or which interview you're getting that day. Again, I'm not absolving them of any responsibility, but it's hard to paint an entire company in that light because of one odd interview, right? It's really an N-by-N matrix, where all companies have all kinds of personas, and you are the outsider.

Robert Blumen 00:40:15 We have one question I think would be great to close out. This is also suggested by you and I also like it because I have no idea what I’m going to get. What are some common misconceptions about the system design interview?

Sriram Panyam 00:40:29 Oof. So, two things, two of many. One: that you have to cover the highest scale there is in a system, with zero errors, with zero cost to operations; that there is a perfect solution that covers every angle of a system. Candidates tend to forget that there are tradeoffs, and tradeoffs are actually the highlights of system design, not things you avoid. The other one, and this may not be a misconception, but it's a trap: many candidates end up going for complexity. They think their system needs to be highly scalable, with a lot of Kafka queues and many, many microservices and, you name it, the flavor of the month, it's got to be there. And they end up overcomplicating a system when a simple system would do. So, choosing complexity to show skill is one of the traps.

Sriram Panyam 00:41:18 The other one is being focused on technology rather than systems. In fact, after that Uber interview, in every interview of mine, I never focused on technology. I would focus on concepts: if my database had this kind of index, that would help me with this kind of SLO; any vendor you want, that's fine. In fact, I would only use technology at the end, where I would go, look, because I need a thousand hosts, and today's EC2 instance prices are, I don't know, I think it's $7 for a mini or whatnot, right? This is what it's going to cost you. So based on that cost, this is your CapEx, and this is what you do. And I would say one other misconception is that people think a system design interview is purely technical. It's by and large a communication, time, and expectations-management interview.
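The closing cost argument, a fleet of hosts times an instance price giving a rough cost figure, is again plain multiplication. A sketch with entirely made-up prices and capacities (the host counts and dollar figures from the episode are not reproduced here):

```python
import math

# Hypothetical fleet-cost estimate: hosts needed for a target load,
# times an assumed per-host hourly price. All numbers are illustrative.
def fleet_monthly_cost(qps: int, qps_per_host: int, hourly_price_usd: float) -> float:
    hosts = math.ceil(qps / qps_per_host)        # can't run a fractional host
    return hosts * hourly_price_usd * 24 * 30    # ~720 hours in a month

# 100k QPS at 1k QPS per host, $0.10/hour per host (made-up figures):
print(fleet_monthly_cost(100_000, 1_000, 0.10))  # 100 hosts * $0.10 * 720 h = 7200.0
```

Ending the design with a number like this does exactly what Sriram describes: it turns an abstract architecture into a cost the interviewer can argue about, which is where the tradeoff conversation lives.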

Robert Blumen 00:42:05 I think we've covered that a few times, but is there any way you wanted to expand or elaborate on that last point that we haven't?

Sriram Panyam 00:42:15 Yeah, so if I was to add something new to it: think about this. As I said before, there are about 30 problems in the system design interview problem set, right? Uber, Bitly, Dropbox, Ticketmaster, Tinder; there's a set of standard problems. And still people struggle with it. I mean, how "hard," in double quotes, can it be to master 30 problems? But if you think about it, people fail or struggle because they look at it as: I need to go in there, draw a bunch of boxes, and it's all an aha moment from there. But imagine the last product that was built. It was not built because somebody decided to throw in boxes. It was built because there was a user need that they had to understand, a business requirement or business need that they had to solve.

Sriram Panyam 00:42:56 And they had constraints; they didn't have oodles of funding on day one. They had to go with three boxes and then incrementally make improvements, even if they were imperfect ones, because they were the right ones at that point in time. That doesn't happen in isolation. It happens through communication; it happens through leadership. It happens through explaining and articulating what you're doing, why you're doing it, and when you can come back to it. So, as a senior engineer, a lot of your time goes into articulating all of this rather than just writing code or building systems. And I think being good at that articulation and the non-technical aspect helps you do well in these interviews too.

Robert Blumen 00:43:30 Sriram, I think that’s a great place to wrap up, to close out the show. Is there any place you want people to find you on the internet or anything you’re involved with?

Sriram Panyam 00:43:42 I talk about system design problems on my own blog, build image.com. I also did a talk at Open Source Summit North America in Denver in July, six months ago, where I built a tool for system design visualization. It was just a fun tool for learning how to reason about systems. You might find it useful, or I would love to collaborate.

Robert Blumen 00:44:03 Sounds good, sir. Sriram, thank you so much for speaking to Software Engineering Radio.

Sriram Panyam 00:44:08 Thank you Robert. It was a lot of fun.

Robert Blumen 00:44:11 For Software Engineering Radio, this has been Robert Blumen. Thank you for listening.

[End of Audio]

---

[Original source](https://se-radio.net/2026/01/se-radio-704-sriram-panyam-on-system-design-interviews/)
