
Inside the Technical Tradeoffs That Make or Break Advanced Chip Manufacturing

A semiconductor node does not become strategic because a roadmap says so. It becomes strategic when it can be used across real designs, at scale, with predictable yield, power, and cost. That is where many advanced-node programs still struggle: the gap between what looks impressive at the device level and what holds up once routing, parasitics, design rules, and lithographic limits are treated as first-class constraints.

Sandra Shaji is a DTCO engineer at Samsung Semiconductor and a Senior IEEE Member. Her work sits on the “physics-to-design” boundary: translating early process and interconnect assumptions into design-relevant feedback so technology definition decisions reflect how chips are actually built and routed.

At that stage, the work is less about optimising finished designs and more about working at the physical limits of scaling itself. Transistor behaviour, wiring resistance, lithographic printability, and power delivery constraints all begin interacting long before a chip layout exists. Decisions made here determine whether a node remains a theoretical advance or becomes something manufacturers can reliably build on.

We spoke with Shaji, a judge for the Globee Awards for Impact, about what changes at sub-2nm, why BEOL choices have started to dominate outcomes, and what the industry must tighten over the next few years to make local manufacturing ambitions operational rather than aspirational.

From the outside, progress in semiconductors looks linear. From your vantage point, where do advanced nodes most often run into trouble?

At sub-2nm, transistor capability is necessary, but it is no longer sufficient. The limiting factors become interconnect parasitics, routing constraints, and the realism of the rules that govern printability. If the wiring stack and the rules do not support dense routing without exploding resistance and capacitance, you can end up with a node that looks strong in isolated metrics but weak in full-chip behaviour.

This is also why the industry’s near-term priorities increasingly emphasise manufacturable scaling, not only device innovation. Deloitte’s 2025 outlook, for example, projects global semiconductor sales of about $697 billion in 2025, with momentum tied to AI and automotive electronics. That demand is not satisfied by “lab wins.” It depends on platforms that are usable across designs and product categories.

From my perspective, the core question is simple: can the node produce libraries and rules that let design teams close timing and power within realistic congestion and routing limits? If that answer is unclear early, it becomes expensive later.

Your work is more applied DTCO than theory. What does a technology definition look like in practice?

Technology definition becomes practical when it produces design outcomes you can measure early: routability, cell-level power and delay under extraction, and block-level PPA trends that stay consistent across multiple architectural choices.

I used DTCO infrastructure to explore standard-cell architecture knobs and then ran place-and-route simulations to generate feedback for technology definition and process teams. The point is not to create a perfect prediction. The point is to create a decision-grade comparison: if we change cell height, pin access strategy, or local routing assumptions, what happens to congestion, wirelength, and switching power once extraction is applied?

In advanced nodes, it is easy to optimise for one dimension while quietly harming another. A density-oriented choice might raise routing demand. A routing-friendly choice might inflate the area or degrade performance. DTCO, applied this way, turns those tradeoffs into something that can be discussed with evidence rather than intuition.
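To make the idea of a decision-grade comparison concrete, here is a minimal sketch of how options might be ranked against a baseline on post-route metrics. This is my own illustration, not Shaji's actual flow; the option names, metrics, weights, and every number are invented.

```python
# Illustrative sketch only: rank hypothetical standard-cell architecture
# options on post-route metrics, normalised to a baseline.
# All numbers below are invented for illustration.

def score(option, baseline, weights):
    """Weighted geometric-free score; lower is better, < 1.0 beats baseline."""
    total = 0.0
    for metric, weight in weights.items():
        total += weight * option[metric] / baseline[metric]
    return total

baseline = {"congestion": 0.82, "wirelength_um": 1.05e6, "switch_power_mw": 41.0}
options = {
    "shorter_cell":     {"congestion": 0.91, "wirelength_um": 0.98e6, "switch_power_mw": 39.5},
    "wider_pin_access": {"congestion": 0.78, "wirelength_um": 1.09e6, "switch_power_mw": 42.2},
}
weights = {"congestion": 0.4, "wirelength_um": 0.3, "switch_power_mw": 0.3}

# Print options from best to worst relative to the baseline.
for name, opt in sorted(options.items(), key=lambda kv: score(kv[1], baseline, weights)):
    print(f"{name}: {score(opt, baseline, weights):.3f} (baseline = 1.000)")
```

The value of even a toy version like this is that it forces the tradeoff to be stated numerically: a density win that worsens congestion shows up directly in the score rather than being argued by intuition.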

Wiring and metal layers rarely make headlines, yet you describe them as decisive. What makes these choices so critical at advanced nodes?

BEOL is where abstract scaling meets physical cost and manufacturability. As pitches tighten, resistance rises, and capacitance penalties become harder to ignore. At the same time, the foundry has to decide what is feasible to build at high volume and what becomes too complex or too costly.

Using DTCO runs and a block-level PPA estimation methodology I developed, I simulated more than 15 lower-metal pitch and spacing combinations to identify where the balance landed: power and performance targets that still respected area goals while staying realistic for process cost.

The intent is not to chase the smallest pitch possible. The intent is to find the configuration that behaves well across realistic routing patterns. Some BEOL options can look attractive in narrow benchmarks and then struggle under dense logic routing because wire resistance and coupling effects stack up. Other options can reduce parasitics but create routing-resource constraints that increase detours and wirelength, pushing power back up.
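A first-order model shows why tighter pitches stack up resistance and coupling. The sketch below is a textbook parallel-plate approximation with assumed material values (copper resistivity, a low-k dielectric, a fixed wire thickness, a 1:1 line/space split), not a foundry model; it illustrates that coupling-dominated wire RC grows roughly as the inverse square of pitch.

```python
# Toy first-order model (my own illustration, not a foundry model) of why
# shrinking lower-metal pitch trades resistance against coupling capacitance.
# Ignores size effects on resistivity and ground capacitance.

RHO_CU = 1.7e-8        # ohm*m, bulk copper resistivity (assumed)
EPS = 3.0 * 8.85e-12   # F/m, dielectric permittivity (assumed k ~ 3)
THICKNESS = 40e-9      # m, assumed wire thickness

def wire_rc_per_mm(pitch_nm):
    """Per-mm wire resistance and neighbour coupling capacitance."""
    width = spacing = pitch_nm * 1e-9 / 2      # 1:1 line/space split
    r_per_m = RHO_CU / (width * THICKNESS)     # ohm/m
    c_per_m = 2 * EPS * THICKNESS / spacing    # F/m, two adjacent neighbours
    return r_per_m * 1e-3, c_per_m * 1e-3      # convert to per mm

for pitch in (48, 40, 32, 28, 24):
    r, c = wire_rc_per_mm(pitch)
    print(f"pitch {pitch} nm: R = {r/1e3:.1f} kohm/mm, "
          f"C = {c*1e15:.1f} fF/mm, RC = {r*c*1e12:.0f} ps/mm^2")
```

In this approximation R scales with 1/width and C with 1/spacing, so the RC product scales with 1/pitch squared: halving the pitch quadruples the delay per unit length, which is exactly why aggressive lower-metal scaling can lose in full-chip behaviour what it gains in density.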

This ties directly to the capital being deployed across the supply chain. SEMI forecasts global semiconductor equipment sales of $125.5 billion in 2025, with test equipment projected to hit $9.3 billion. That spending only becomes national capability if the process choices it enables translate into repeatable design closure, not one-off wins.

Design rules often sound administrative to outsiders. Why are they strategic at these nodes?

At advanced nodes, design rules encode manufacturing reality. If the rules underrepresent lithographic limits, design teams get optimistic routing capacity that fails in practice. If the rules overconstrain everything, you leave performance on the table and force area inflation.

I was involved in design rule optimisation aimed at representing lithographic limitations more accurately while also finding opportunities to increase routing wire resources. The practical version of this work is translating what can be printed reliably into what a router can depend on. That affects everything downstream: pin access, track assignment, via density, and congestion hot spots that decide whether timing closure is routine or fragile.

A gap the industry still struggles with is consistency: the rules must reflect printability, but they also must behave predictably under real routing behaviour. That is where early co-optimisation matters because it allows you to stress rules using realistic design flows rather than treating them as static constraints.
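A back-of-envelope example of how rules translate into router-visible resources: with an assumed cell height and drawn metal pitch (both hypothetical numbers, not any real node's values), even a few nanometres of lithographic guardband in the rules can cost a routing track per cell row.

```python
# Back-of-envelope sketch (my own invented numbers): how a rule guardband on
# minimum metal pitch changes the routing tracks a router can depend on.

import math

def usable_tracks(cell_height_nm, drawn_pitch_nm, litho_guardband_nm):
    """Tracks available per cell row once the printable pitch is honoured."""
    printable_pitch = drawn_pitch_nm + litho_guardband_nm
    return math.floor(cell_height_nm / printable_pitch)

CELL_HEIGHT = 180  # nm, assumed cell height for illustration

for guardband in (0, 2, 4):
    tracks = usable_tracks(CELL_HEIGHT, drawn_pitch_nm=28, litho_guardband_nm=guardband)
    print(f"guardband {guardband} nm -> {tracks} usable tracks per row")
```

The cliff behaviour is the point: the rule change is continuous, but the routing resource it removes is quantised, so a small printability correction can flip a library from routable to congested.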

As new approaches move from research into industry, what tells you whether an idea is ready to survive real manufacturing constraints?

I test that by forcing comparisons to survive contact with end-to-end constraints. I am less focused on a single strong metric than on whether a choice holds up when extraction, routing, and block-level behaviour are considered together.

One habit that helps is looking at how other technical programs define impact when they evaluate work. Strong work is clear about the tradeoff it is making and the constraint it is honouring. That translates well to DTCO. The best technical decisions are explicit about what they optimise, what they accept, and what risks they reduce.

I also stay close to the research-to-practice pipeline. Co-authoring the IEEE-sponsored paper “ART-3D: Analytical 3D Placement with Reinforced Parameter Tuning for Monolithic 3D ICs” sharpened my intuition for how placement, routing, and constraints interact when density increases. Even when the context is not 3D, the lesson carries: architecture decisions cannot be evaluated in isolation.

Over the next couple of years, what gaps does the industry need to close to make advanced nodes truly scalable and locally dependable?

Over the next couple of years, the biggest gap the industry must close is alignment between technology ambition and design-closure reality. Local manufacturing goals are only as strong as the predictability of the nodes being brought into production. That predictability comes from early work that treats routability and extraction as first-order constraints, not late-stage surprises.

Second, BEOL and design rules have to be treated as product-defining, not secondary. As demand rises for AI infrastructure and electrified systems, the winners will be nodes where wiring choices, routing resources, and cost envelopes are coherent as a system. This is where process decisions stop being abstract and start determining whether a node can scale reliably.

Third, the feedback loop has to tighten. McKinsey has noted that semiconductor companies plan to invest roughly $1 trillion in new fabs through 2030, while also warning that scaling barriers remain unresolved. The next few years are the window where process assumptions, design enablement, and manufacturability must move in lockstep so that capital investment translates into dependable capacity. That discipline is increasingly visible in teams that have proven they can bridge design and manufacturing execution in practice, not just on paper—as reflected in recognitions like the 2025 Best Synergy Award at Samsung.

The way forward is not a single breakthrough. It is rigor: early comparisons grounded in real routing, real parasitics, and rules that encode manufacturing truth. When those pieces align, advanced nodes stop being speculative and start being usable.

on January 5, 2026