Machine Learning/AI

The Geometry of Generalization: Understanding Why Neural Networks Work

by dnaadmin March 30, 2026

For decades, the conventional wisdom in statistical learning theory was clear: a model with more parameters than training samples will overfit. Complexity must be penalized. Regularization is mandatory. Then deep learning arrived and broke every rule in the textbook — networks with millions of parameters trained to zero training loss somehow generalize beautifully to unseen data. This article explores why.


The Classical View and Why It Broke Down

Traditional learning theory frames generalization through the lens of VC dimension and Rademacher complexity. The core idea is that a hypothesis class with high capacity can shatter arbitrary labels, so its generalization gap — the difference between training and test error — grows with model complexity. The prescription is simple: keep your model small, or explicitly regularize.

This framework worked well for SVMs, decision trees, and shallow neural networks. Then practitioners started training ResNets, Transformers, and GPT-class models — all wildly overparameterized — and observed that these models, despite fitting training data perfectly, generalized to held-out data with remarkable accuracy. Classical theory predicted catastrophe. Reality delivered state-of-the-art performance.

The phenomenon was formalized by Belkin et al. (2019) through the concept of the double descent curve. As model complexity increases, test error first follows the classical U-shaped bias-variance curve, peaks at the interpolation threshold (where the model just barely fits training data), and then — counterintuitively — decreases as the model becomes increasingly overparameterized. This is not a quirk. It is a fundamental property of modern ML systems.


Implicit Regularization: The Inductive Bias of Gradient Descent

If overparameterized models aren’t explicitly regularized, something else must be constraining their solutions. The answer lies in the optimization algorithm itself.

Consider a linear regression problem where the number of parameters p far exceeds the number of samples n. There are infinitely many parameter vectors θ that achieve zero training loss. Gradient descent initialized at the origin doesn’t just find any solution — it finds the minimum L2-norm solution. This is provable: gradient descent on squared loss with zero initialization traces a path through parameter space that stays in the row space of the data matrix, which corresponds exactly to the minimum-norm interpolant.

This is implicit regularization. The algorithm’s geometry imposes structure on the solution even without an explicit penalty term.
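The minimum-norm behavior is easy to verify numerically. Below is a minimal sketch (the sample and values are illustrative): gradient descent from the origin on a one-sample, two-parameter least-squares problem converges to θ* = x·y/‖x‖², the minimum-L2-norm interpolant.

```cpp
#include <array>
#include <cassert>
#include <cmath>

// One sample x = (1, 2), y = 3, two parameters: infinitely many
// interpolants exist. Gradient descent initialized at the origin stays
// in span{x} and converges to the minimum-L2-norm solution
// theta* = x * y / ||x||^2 = (0.6, 1.2).
std::array<double, 2> min_norm_gd(int steps = 10000, double lr = 0.01) {
    const double x0 = 1.0, x1 = 2.0, y = 3.0;
    double t0 = 0.0, t1 = 0.0;                   // theta starts at the origin
    for (int i = 0; i < steps; ++i) {
        const double r = t0 * x0 + t1 * x1 - y;  // residual
        t0 -= lr * r * x0;                       // gradient step on 0.5 * r^2
        t1 -= lr * r * x1;
    }
    return {t0, t1};
}
```

Any vector of the form (3 − 2c, c) interpolates this sample; gradient descent picks the unique one lying in the row space of the data.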

For neural networks, the story is richer and less fully understood, but the evidence is compelling:

  • Stochastic Gradient Descent (SGD) with small batch sizes finds flatter minima than full-batch gradient descent. Flatter minima — measured by the sharpness of the loss landscape around the solution — correlate empirically with better generalization. This is the sharpness-aware minimization intuition, formalized by Foret et al. (2021).
  • Early stopping acts as implicit L2 regularization in function space. The number of optimization steps effectively controls the complexity of the learned function.
  • The Edge of Stability phenomenon (Cohen et al., 2021) shows that during training, the largest eigenvalue of the Hessian (the sharpness) hovers just above the stability threshold of gradient descent. The optimizer is perpetually on the edge of instability, which may itself contribute to finding generalization-friendly solutions.

The Neural Tangent Kernel: Linearizing the Deep Net

One of the most elegant theoretical tools to emerge in recent years is the Neural Tangent Kernel (NTK), introduced by Jacot, Gabriel, and Hongler (2018).

Consider a neural network f(x; θ) with parameters θ ∈ ℝᵖ. The NTK is defined as:

K(x, x') = ⟨∇_θ f(x; θ), ∇_θ f(x'; θ)⟩

In the infinite-width limit, a remarkable thing happens: as the network width → ∞, the NTK converges to a deterministic, fixed kernel at initialization and remains constant throughout training. This means training an infinitely wide neural network is equivalent to kernel regression with the NTK — a convex problem with a closed-form solution.
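For intuition, the NTK of a toy network can be computed directly from its definition. This is a hedged sketch, not a library API: a one-hidden-layer scalar network f(x) = (1/√m) Σⱼ aⱼ tanh(wⱼx) with hand-derived parameter gradients.

```cpp
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

// Empirical NTK at random initialization:
//   K(x, x') = <grad_theta f(x), grad_theta f(x')>.
struct TinyNet {
    std::vector<double> w, a;
    explicit TinyNet(int m, unsigned seed = 42) : w(m), a(m) {
        std::mt19937 gen(seed);
        std::normal_distribution<double> gauss(0.0, 1.0);
        for (auto& v : w) v = gauss(gen);
        for (auto& v : a) v = gauss(gen);
    }
    // Gradient of f wrt (w_1..w_m, a_1..a_m) at scalar input x.
    std::vector<double> grad(double x) const {
        const int m = static_cast<int>(w.size());
        const double s = 1.0 / std::sqrt(static_cast<double>(m));
        std::vector<double> g(2 * m);
        for (int j = 0; j < m; ++j) {
            const double h = std::tanh(w[j] * x);
            g[j]     = s * a[j] * x * (1.0 - h * h);  // d f / d w_j
            g[m + j] = s * h;                         // d f / d a_j
        }
        return g;
    }
    double ntk(double x, double xp) const {
        const auto gx = grad(x), gxp = grad(xp);
        double k = 0.0;
        for (std::size_t i = 0; i < gx.size(); ++i) k += gx[i] * gxp[i];
        return k;  // symmetric and positive-semidefinite by construction
    }
};
```

Increasing m makes this kernel concentrate around its deterministic infinite-width limit, which is the content of the NTK theorem.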

This gives us a tractable theoretical lens: the generalization of infinitely wide networks is governed by the smoothness of the target function in the RKHS (Reproducing Kernel Hilbert Space) induced by the NTK. Functions that are smooth with respect to this kernel are learned efficiently; irregular functions are not.

The practical implication is subtle but important. The NTK regime tells us that networks learn a specific kind of interpolant — one that is smooth in the geometry defined by the network architecture and initialization. This is the “implicit prior” that drives generalization.

Of course, finite-width networks and those trained with large learning rates move out of the NTK regime, entering what is sometimes called the feature learning regime, where the network’s internal representations evolve meaningfully during training. Much of the ongoing theoretical work in deep learning is about understanding this transition.


Benign Overfitting: When Interpolation Doesn’t Hurt

A particularly striking theoretical result comes from Bartlett, Long, Lugosi, and Tsigler (2020): benign overfitting. They prove, for linear regression in high dimensions, that a model can interpolate noisy training data — fitting the noise perfectly — and still achieve near-optimal test error.

The intuition is geometric. In high-dimensional spaces, the model decomposes the parameter space into a “signal” subspace and a “noise” subspace. The minimum-norm interpolant fits the signal with low bias. The noise gets absorbed into the high-dimensional complement, where it is diluted across many directions, contributing negligible variance to predictions on new points.

For this to work, two conditions must hold:

  1. High effective rank of the data covariance — the input distribution must spread mass across many directions.
  2. Sufficient overparameterization — the model needs enough capacity to separate signal from noise geometrically.

These conditions are frequently met in practice: natural language tokens live in high-dimensional embedding spaces, and image patches exhibit approximately low-rank structure. This is part of why deep learning works so well on real-world data distributions despite the apparent paradox.


Practical Takeaways for Practitioners

Theory and practice don’t always travel together, but these insights offer concrete guidance:

1. Don’t underparameterize. If your model sits near the interpolation threshold, you’re in the worst of both worlds — high variance without the benign-overfitting benefits of true overparameterization. When in doubt, scale up before adding explicit regularization.

2. Learning rate is a regularizer. Larger learning rates in SGD tend to find flatter minima with better generalization. Don’t rush to lower it at the first sign of instability.

3. Architecture encodes a prior. The NTK perspective means your choice of architecture isn’t just about expressivity — it defines what kinds of functions are “natural” to your model. Convolutional architectures impose translation equivariance; Transformers impose a permutation-equivariant attention prior. Choose architecture to match your problem’s symmetries.

4. Batch size interacts with generalization. Large-batch training tends to find sharper minima (Keskar et al., 2017). If you’re scaling batch size for throughput, compensate with learning rate warmup and potentially explicit sharpness penalties.

5. The loss landscape matters more than the loss value. A model at training loss = 0.001 in a sharp basin may generalize worse than one at 0.01 in a flat basin. Tools like SAM (Sharpness-Aware Minimization) explicitly optimize for this.


Where the Field Is Heading

The theoretical frontier is moving fast. Some active directions worth watching:

  • Mechanistic interpretability: Rather than studying generalization at the macro level, researchers are reverse-engineering the specific algorithms neural networks implement internally — induction heads in Transformers, modular arithmetic circuits in MLP layers, etc.
  • Grokking: Power et al. (2022) and others have documented cases where networks first memorize training data and then, after further training, “grok” the underlying structure — suddenly jumping from near-zero to near-perfect test accuracy. This suggests generalization can happen through phase transitions, not gradual improvement.
  • Information-theoretic bounds: PAC-Bayes theory, tightened with data-dependent priors, is currently producing the tightest non-vacuous generalization bounds for practical deep networks — a significant improvement over classical VC-based bounds.

Conclusion

The question “why do neural networks generalize?” remains one of the deepest open problems in machine learning. But the last five years have delivered genuine progress: we understand that gradient descent implicitly regularizes, that overparameterization can be beneficial in high dimensions, that the geometry of the loss landscape matters as much as its value, and that network architecture encodes strong inductive priors.

Classical statistical learning theory gave us the right questions. Modern deep learning theory is slowly, rigorously providing the answers — and those answers are turning out to be far more interesting than anyone expected.


Further reading: Belkin et al., “Reconciling Modern Machine Learning Practice and the Classical Bias-Variance Tradeoff” (PNAS 2019); Jacot et al., “Neural Tangent Kernel” (NeurIPS 2018); Bartlett et al., “Benign Overfitting in Linear Regression” (PNAS 2020); Cohen et al., “Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability” (ICLR 2021).


 

Kernel Internals

The Process of “Life” — Task Scheduling and the CFS

by dnaadmin March 30, 2026

For a System Architect, a “Process” or “Thread” isn’t just a block of code; it is a consumer of CPU cycles, cache lines, and power. In Linux, the entity that decides who gets to consume these resources is the Completely Fair Scheduler (CFS).

Unlike simple RTOS schedulers that use fixed priorities, the Linux kernel must balance the needs of thousands of tasks—ranging from high-throughput background syncs to latency-sensitive user inputs.


1. The Core Philosophy: “Fairness”

The CFS doesn’t use traditional “time slices” (e.g., 10ms for everyone). Instead, it aims to provide each task with a “fair” proportion of the CPU’s processing power.

If you have $N$ runnable tasks, each one should ideally get $1/N$ of the CPU’s time. The scheduler tracks how much time each task has actually spent on the CPU using a variable called vruntime (virtual runtime).

  • The Rule: The task with the smallest vruntime is always the next one to be scheduled.

  • The Weight: “Nice” values (priority) act as a multiplier. A high-priority task’s vruntime increases more slowly than a low-priority task’s, allowing it to stay on the CPU longer before it is no longer the “smallest.”


2. The Data Structure: The Red-Black Tree

To make the decision of “who is next” nearly instantaneous, the CFS stores all runnable tasks in a Red-Black Tree (a self-balancing binary search tree).

  • Left-most Leaf: The task with the lowest vruntime is always sitting at the far left of the tree.

  • Complexity: Finding the next task is $O(1)$ (it’s just a pointer to the left-most node), and re-inserting a task after it has run is $O(\log N)$. For a system with hundreds of tasks, this is incredibly efficient.
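The vruntime rule and the ordered tree can be sketched together; here std::multimap stands in for the kernel's red-black tree (a toy model, not kernel code — names and the weight scheme are illustrative).

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// Toy CFS: runnable tasks keyed by vruntime in an ordered tree.
// A higher weight (lower "nice") makes vruntime grow more slowly,
// so the task is picked more often.
struct ToyCFS {
    std::multimap<double, std::pair<std::string, double>> rq; // vruntime -> (name, weight)
    std::map<std::string, double> cpu_time;                   // real CPU time received
    void add(const std::string& name, double weight) {
        rq.insert({0.0, {name, weight}});
    }
    void tick(double slice) {
        auto it = rq.begin();                 // leftmost = smallest vruntime: O(1)
        const double vr = it->first;
        const auto task = it->second;
        rq.erase(it);
        cpu_time[task.first] += slice;
        rq.insert({vr + slice / task.second, task}); // weighted update, O(log N)
    }
};
```

Running this long enough, a weight-2 task receives roughly twice the CPU time of a weight-1 task while their vruntimes stay equal — which is exactly the fairness invariant.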


3. SMP and Load Balancing

In a modern multi-core SoC (ARM Neoverse, x86 Xeon), the kernel doesn’t just manage one tree; it manages a Runqueue (and a tree) for every single CPU core.

This leads to the Architect’s biggest challenge: Load Balancing.

  • Push/Pull Migration: If CPU 0 is swamped and CPU 1 is idle, the kernel will “pull” tasks from CPU 0 to CPU 1.

  • The Cost: Moving a task between cores is expensive. You lose “Cache Warmth” (the L1/L2 caches on the new core don’t have the task’s data), leading to a performance dip.


4. Architect’s Tool: CPU Affinity and Isolation

When designing a platform for a high-performance Data Center or a Real-Time Edge device, you often don’t want “fairness.” You want determinism.

  • sched_setaffinity: This system call allows you to “pin” a critical thread to a specific core. This ensures the thread never suffers from the cache-miss penalty of migration.

  • Isolcpus: You can tell the Linux kernel at boot to completely ignore certain cores (e.g., isolcpus=2,3). The CFS will never schedule general tasks there, leaving them entirely open for your high-priority, bare-metal-like firmware threads.
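A minimal affinity sketch (Linux-specific; the helper name is ours, not a kernel API): pin the calling thread to the first CPU its current cpuset allows.

```cpp
#include <cassert>
#include <sched.h>   // sched_{get,set}affinity, sched_getcpu, CPU_* macros

// Requires Linux; g++ defines _GNU_SOURCE by default, which these
// glibc extensions need. Returns the pinned CPU, or -1 on failure.
int pin_to_first_allowed() {
    cpu_set_t allowed;
    CPU_ZERO(&allowed);
    if (sched_getaffinity(0, sizeof(allowed), &allowed) != 0) return -1;
    int cpu = -1;
    for (int i = 0; i < CPU_SETSIZE; ++i)
        if (CPU_ISSET(i, &allowed)) { cpu = i; break; }
    if (cpu < 0) return -1;
    cpu_set_t one;
    CPU_ZERO(&one);
    CPU_SET(cpu, &one);                                   // single-CPU mask
    if (sched_setaffinity(0, sizeof(one), &one) != 0) return -1; // 0 = this thread
    return cpu;
}
```

After the call returns, the kernel has already migrated the thread onto the chosen CPU if it was running elsewhere.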


5. Summary Table: CFS vs. RTOS Scheduling

Feature        | Linux CFS                    | Typical RTOS (FreeRTOS/QNX)
Logic          | Virtual Runtime (Fairness)   | Fixed Priority (Preemptive)
Data Structure | Red-Black Tree               | Linked List / Bitmap
Primary Goal   | Maximize Throughput/Fairness | Minimize Latency/Jitter
Multi-core     | Complex Load Balancing       | Often simple Affinity

Architect’s Interview Tip

If asked how Linux handles real-time tasks, mention the SCHED_FIFO and SCHED_RR policies. These bypass the CFS “fairness” logic and operate on a strict priority basis. Mentioning that these policies can “starve” the rest of the OS if not managed carefully shows you understand the risks of overriding the kernel’s default behavior.


In the next article, we dive into the foundation of kernel stability: Virtual Memory and the Buddy Allocator.

Coding

Inline Functions, Macros, and the Preprocessor Pitfalls

by dnaadmin March 30, 2026

In the final article of our series, we address the “old world” vs. the “new world” of code optimization. In a System Architect interview, you aren’t just asked how to write a macro; you are asked why a macro might be a “silent killer” in a safety-critical system and how modern C++ features like inline and constexpr provide a safer alternative.


1. The Preprocessor: Simple Text Substitution

Macros (#define) operate before the compiler even sees the code. They are simple text-replacement engines.

  • The Benefit: Macros have zero overhead because they don’t involve a function call. They can also do “Stringification” (#) and “Concatenation” (##), which are useful for generating repetitive boilerplate code (e.g., mapping 256 interrupt vectors).

  • The Pitfall (Side Effects):

    C

    #define SQUARE(x) ((x) * (x))
    int a = 5;
    int b = SQUARE(++a); // Danger! a is incremented twice.
    

    In this case, the macro expands to ((++a) * (++a)), which modifies a twice with no intervening sequence point — formally Undefined Behavior. On typical compilers b becomes 42 instead of 36, and a ends up as 7. This is a classic interview “gotcha.”


2. inline Functions: The Compiler’s Hint

An inline function is a request to the compiler to replace the function call with the actual code of the function.

  • Type Safety: Unlike macros, inline functions are type-checked by the compiler.

  • Debuggability: Modern debuggers can often “step into” an inline function, whereas a macro is invisible to the debugger.

  • The Catch: inline is only a hint. The compiler can choose to ignore it if the function is too complex (e.g., contains a loop or recursion) or if inlining would cause massive “Code Bloat.”
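The type-safety and single-evaluation points are easy to demonstrate side by side (the function names here are illustrative): the inline function evaluates its argument exactly once, so side effects behave as written.

```cpp
#include <cassert>
#include <utility>

// Unlike a macro, an inline function receives an already-evaluated
// argument, so an expression like ++a happens exactly once.
inline int square(int x) { return x * x; }

std::pair<int, int> inline_with_side_effect() {
    int a = 5;
    int b = square(++a);   // ++a evaluated once: a == 6, b == 36
    return {a, b};
}
```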


3. The “Architect’s Pattern”: do { ... } while(0)

If you must use a macro for multiple statements, you should always wrap it in a do-while block.

The Wrong Way:

C

#define INIT_HW() gpio_init(); i2c_init();
if (condition) INIT_HW(); // i2c_init() runs even if condition is false!

The Right Way:

C

#define INIT_HW() do { \
    gpio_init();       \
    i2c_init();        \
} while(0)

This ensures the macro behaves like a single statement and is compatible with if-else blocks.


4. constexpr: The Ultimate Inline

As an architect, you should favor constexpr (C++11) over macros for constants and simple math.

  • Why? A constexpr function is guaranteed to be evaluated at compile-time if the inputs are known. It generates zero machine instructions in the final binary—it simply injects the result. This is perfect for calculating baud rate dividers or fixed-point scaling factors.
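A sketch of the idea (the clock frequency and rounding formula are illustrative, not from a real datasheet): with constant inputs the result is folded into the binary, and the static_assert proves it happened at compile time.

```cpp
#include <cstdint>

// Compile-time baud-rate divider: no division runs at boot when the
// arguments are compile-time constants.
constexpr uint32_t baud_divider(uint32_t clock_hz, uint32_t baud) {
    return (clock_hz + baud / 2) / baud;   // rounded integer division
}

static_assert(baud_divider(48000000u, 115200u) == 417u,
              "evaluated entirely at compile time");
```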


5. Summary Table: Macro vs. Inline vs. Constexpr

Feature       | #define Macro                      | inline Function                          | constexpr Function
Type Checking | No                                 | Yes                                      | Yes
Scope Respect | No (Global)                        | Yes                                      | Yes
Evaluation    | Preprocessor                       | Runtime (Usually)                        | Compile-time
Side Effects  | High Risk                          | Safe                                     | Safe
Best Use Case | Code generation, legacy constants. | Small, high-frequency utility functions. | Math constants, lookup tables, bit-masking.

Architect’s Interview Tip

When wrapping up an interview, mention “Code Bloat.” An architect must always be wary that excessive inlining (especially of large functions) can increase the size of the .text segment so much that it no longer fits in the Instruction Cache (I-Cache). This can lead to a “Cache Thrashing” effect where the system actually runs slower than it would with standard function calls.


Series Conclusion

Congratulations! You’ve navigated through the 10 most critical areas of C/C++ Systems Engineering. From the Volatile contract with hardware to the nuances of Linker Scripts and Smart Pointers, you now have the “Architect’s perspective” needed to lead high-level technical discussions and interviews.

Coding

RAII and Smart Pointers — Managing Resources without a Garbage Collector

by dnaadmin March 30, 2026

In a “Bare Metal” or System-level C++ interview, the interviewer will often test your knowledge of Resource Management. In C, you have malloc and free, which are prone to leaks and “use-after-free” bugs. In modern C++ (C++11 and beyond), we use RAII (Resource Acquisition Is Initialization) and Smart Pointers to achieve the safety of a Garbage Collector without the non-deterministic performance “hiccups” of a GC.


1. The Core Philosophy: RAII

RAII is the most important concept in C++ systems programming. It ties the lifetime of a resource (memory, a mutex, or a file handle) to the lifetime of a stack object.

  • Constructor: Acquires the resource (e.g., locks a mutex).

  • Destructor: Releases the resource (e.g., unlocks the mutex).

The Magic: Because the C++ standard guarantees that a stack object’s destructor is called when it goes out of scope (even if an error or exception occurs), the resource is guaranteed to be released.
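A minimal RAII guard in the spirit of std::lock_guard (a sketch for illustration; prefer the standard one in production code): the constructor acquires, the destructor releases, even on early return or exception.

```cpp
#include <cassert>
#include <mutex>

// RAII: the lock's lifetime is the stack object's lifetime.
class LockGuard {
public:
    explicit LockGuard(std::mutex& m) : m_(m) { m_.lock(); }
    ~LockGuard() { m_.unlock(); }
    LockGuard(const LockGuard&) = delete;            // non-copyable
    LockGuard& operator=(const LockGuard&) = delete;
private:
    std::mutex& m_;
};

std::mutex g_mtx;
int g_counter = 0;

void bump() {
    LockGuard lock(g_mtx);   // released automatically at the closing brace
    ++g_counter;
}
```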


2. Smart Pointers: unique_ptr vs. shared_ptr

Modern C++ provides three main smart pointers in the <memory> header.

A. std::unique_ptr (The Zero-Overhead Workhorse)

This is the default choice for embedded systems. It represents exclusive ownership.

  • Overhead: Exactly zero. It is just a wrapper around a raw pointer.

  • Copying: Forbidden. You cannot copy a unique_ptr, but you can move it using std::move().

  • Use Case: Managing a hardware buffer or a driver instance that should only have one owner.
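Move-only ownership in two lines (the helper names are illustrative): copying a unique_ptr is a compile error, and a transfer must be spelled std::move.

```cpp
#include <cassert>
#include <memory>
#include <utility>

// Exclusive ownership: exactly one unique_ptr owns the object at a time.
std::unique_ptr<int> make_buffer() { return std::make_unique<int>(42); }

void transfer(std::unique_ptr<int>& src, std::unique_ptr<int>& dst) {
    dst = std::move(src);    // ownership moves; src is left null
}
```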

B. std::shared_ptr (Reference Counted)

This represents shared ownership. Multiple pointers can point to the same object.

  • Overhead: High. It maintains a “Control Block” in the heap with a reference count.

  • Thread Safety: The reference count increment/decrement is atomic, but the object itself is not.

  • Use Case: Complex data structures where multiple components need to keep an object alive.

C. std::weak_ptr (The Observer)

A weak_ptr points to an object managed by a shared_ptr but doesn’t increase the reference count.

  • Use Case: Breaking “Circular Dependencies” (where Object A points to B, and B points to A), which would otherwise cause a permanent memory leak.
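A sketch of cycle-breaking (the Node type is illustrative): the forward link is strong, the back link is weak, so neither object keeps the other alive artificially.

```cpp
#include <cassert>
#include <memory>

struct Node {
    std::shared_ptr<Node> next;  // strong forward link
    std::weak_ptr<Node> prev;    // weak back link: no refcount bump
};

// Build A -> B (strong) and B -> A (weak); A's use_count stays 1,
// so both nodes are destroyed when the locals go out of scope.
long back_link_use_count() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;
    b->prev = a;               // does NOT increment a's reference count
    return a.use_count();
}
```

Had prev been a shared_ptr, the count would be 2 and the pair would leak permanently.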


3. The “Architect’s Question”: Custom Deleters

Q: How do you use a unique_ptr to manage a memory-mapped hardware register or a third-party C-library resource?

A: You use a Custom Deleter. Instead of calling delete, the unique_ptr can be configured to call a specific function (like pci_free_buffer or fclose) when it goes out of scope.

C++

// Example: unique_ptr managing a file handle
std::unique_ptr<FILE, decltype(&fclose)> myFile(fopen("log.txt", "w"), &fclose);

4. Why not just use a Garbage Collector?

In a semiconductor or real-time environment, a Garbage Collector is often unacceptable because:

  1. Pause Times: A GC can “stop the world” at any moment to reclaim memory, killing real-time determinism.

  2. Memory Overhead: GCs usually require significantly more RAM to operate efficiently.

  3. Resource Non-determinism: A GC only reclaims memory. RAII reclaims any resource (like closing a PCIe window or releasing a hardware spinlock) immediately and predictably.


5. Summary Table: Smart Pointer Characteristics

Feature     | std::unique_ptr      | std::shared_ptr
Ownership   | Exclusive            | Shared
Size        | Same as raw pointer  | 2x raw pointer (plus heap block)
Performance | Fast (Zero overhead) | Slower (Atomic ref-counting)
Move/Copy   | Move only            | Move and Copy
Header      | <memory>             | <memory>

Architect’s Interview Tip

When discussing smart pointers, mention Ownership Semantics. Explain that using unique_ptr in a function signature clearly documents whether a function is “borrowing” a resource or “taking ownership” of it. This is a form of “Self-Documenting Code” that reduces bugs in large team environments.


In our final article of this series, we tackle the “Old vs. New” debate: Inline Functions, Macros, and the Pitfalls of Preprocessor Logic.

Coding

The Linker Script – The Invisible Blueprint of Your System

by dnaadmin March 30, 2026

In a standard software interview, “linking” is something the IDE does automatically. In a System Architect interview, the Linker Script (.ld) is a primary topic. It is the bridge between your C/C++ code and the physical silicon. If the compiler creates the “bricks” (object files), the linker script is the “blueprint” that decides where those bricks sit in the memory map.


1. The Anatomy of a Linker Script

A linker script defines two critical things: MEMORY and SECTIONS.

  • MEMORY: This block describes the physical hardware. It tells the linker exactly where the Flash (ROM) and RAM start and how large they are.

  • SECTIONS: This block tells the linker which code/data segments (from Article 2) should go into which physical memory.

Code snippet

MEMORY
{
    FLASH (rx) : ORIGIN = 0x08000000, LENGTH = 512K
    RAM   (xrw): ORIGIN = 0x20000000, LENGTH = 128K
}

SECTIONS
{
    .text : { *(.text) } > FLASH
    .data : { *(.data) } > RAM AT > FLASH
    .bss  : { *(.bss)  } > RAM
}

2. The LMA vs. VMA Distinction

This is the “Architect-level” question that catches many off guard: “What is the difference between Load Memory Address (LMA) and Virtual Memory Address (VMA)?”

  • LMA (Load Address): Where the data lives when the power is off (usually in Flash).

  • VMA (Virtual/Run Address): Where the data lives when the program is running (usually in RAM).

The Scenario: Initialized globals (.data) must be stored in Flash so they aren’t lost when the power is cut. But the CPU needs to write to them, so they must be moved to RAM at boot. The Linker Script handles this using the AT > FLASH syntax shown above.


3. Linker Symbols: Communication with C Code

How does your C code know where the RAM starts or where the .bss ends so it can zero it out? The Linker Script exports Symbols.

Code snippet

_bss_start = .;
.bss : { *(.bss) }
_bss_end = .;

In your C code, you can access these:

C

extern uint32_t _bss_start, _bss_end;
// Use these addresses to memset the BSS to zero at boot
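The startup copy itself is a few lines of C. Below is a host-side simulation (the arrays stand in for the real linker symbols, so it can run anywhere): copy .data from its load address in flash to its run address in RAM, then zero .bss.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Stand-ins for the linker-provided regions (illustrative values).
uint8_t flash_data_image[4] = {0xDE, 0xAD, 0xBE, 0xEF}; // LMA copy of .data
uint8_t ram_data[4];                                    // VMA of .data
uint8_t ram_bss[8] = {1, 2, 3, 4, 5, 6, 7, 8};          // pretend-garbage .bss

void crt0_init() {
    std::memcpy(ram_data, flash_data_image, sizeof(ram_data)); // copy .data
    std::memset(ram_bss, 0, sizeof(ram_bss));                  // zero .bss
}
```

On real hardware the loop bounds come from the exported symbols (_bss_start, _bss_end, and the .data LMA/VMA pair) instead of sizeof.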

4. Custom Sections: Mapping to Hardware

Sometimes you need a variable to live at a very specific hardware address (e.g., a mailbox for Inter-Processor Communication or a configuration block at the end of Flash).

  • Step 1 (C Code): __attribute__((section(".config_data"))) uint32_t my_config = 0x1234;

  • Step 2 (Linker Script): Create a section entry for .config_data and map it to a specific memory region.


5. Summary Table: Common Linker Terminology

Term            | Meaning
ENTRY()         | Defines the first instruction to run (the Reset Vector).
KEEP()          | Tells the linker NOT to delete a section even if it seems unused (essential for Vector Tables).
. (Dot Operator)| The “Location Counter” — the current address the linker is working on.
Alignment       | Ensuring sections start on 4-byte or 8-byte boundaries (using ALIGN(4)).

Architect’s Interview Tip

When discussing linker scripts, mention Memory Overlays or Tightly Coupled Memory (TCM). Explain how you use the linker script to place performance-critical code (like a fast FFT or an ISR) into internal SRAM (TCM) while keeping the rest of the OS in slower external Flash. This shows you are optimizing for the Memory Wall we discussed in the System Design series.


In the next article, we return to modern C++: RAII and Smart Pointers — Managing Resources without a Garbage Collector.

Coding

Bit Manipulation, Bit-Fields, and the Endianness Trap

by dnaadmin March 30, 2026
written by dnaadmin

In the semiconductor industry, “The Bit” is the fundamental unit of communication. Whether you are configuring a PLL (Phase-Locked Loop), checking a FIFO status, or parsing a PCIe TLP header, you are working at the bit level. An interview for a System Architect will inevitably test your ability to manipulate these bits safely and portably.


1. The Essential Bitwise Toolkit

Every systems engineer must be able to write these three operations in their sleep:

  • Setting a Bit: REG |= (1U << n);

  • Clearing a Bit: REG &= ~(1U << n);

  • Toggling a Bit: REG ^= (1U << n);

  • Checking a Bit: if (REG & (1U << n))

The Architect’s Detail: Notice the use of 1U. In C, 1 is a signed int. Shifting a signed 1 into the sign bit (bit 31) is Undefined Behavior. Always use unsigned literals (1U, 1UL) when performing bitwise math.
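The toolkit can also be written as type-checked helpers (a sketch; on real hardware the register would be a volatile MMIO reference, not a plain value). The 1U keeps every shift unsigned, so even bit 31 is safe.

```cpp
#include <cassert>
#include <cstdint>

// The four canonical operations, with unsigned literals throughout.
inline uint32_t set_bit(uint32_t reg, unsigned n)    { return reg |  (1U << n); }
inline uint32_t clear_bit(uint32_t reg, unsigned n)  { return reg & ~(1U << n); }
inline uint32_t toggle_bit(uint32_t reg, unsigned n) { return reg ^  (1U << n); }
inline bool     check_bit(uint32_t reg, unsigned n)  { return (reg >> n) & 1U; }
```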


2. The Bit-Field Controversy

C and C++ allow you to define struct members with specific bit widths:

C

struct ControlRegister {
    uint32_t enable : 1;
    uint32_t mode   : 3;
    uint32_t rsvd   : 28;
};

The Interview Question: “Why do many safety-critical coding standards (like MISRA) discourage the use of bit-fields for hardware registers?”

The Answer: Portability and Ordering. The C standard leaves the allocation order of bit-fields within a storage unit implementation-defined: one compiler may allocate starting from the Least Significant Bit (LSB), another from the Most Significant Bit (MSB). The compiler may also insert padding between fields to satisfy alignment requirements.

  • Architect’s Solution: For hardware registers, use explicit Bit Masks and Shifts. It’s more verbose, but it is 100% portable across different compilers and architectures.


3. The Endianness Trap

Endianness refers to the order in which bytes are stored in memory.

  • Little-Endian (x86, ARM): The least significant byte is stored at the lowest address.

  • Big-Endian (PowerPC, Network Protocols): The most significant byte is stored at the lowest address.

The Scenario: You receive a 32-bit value 0x12345678 over a network (Big-Endian) and store it in a local buffer.

C

uint8_t buffer[4] = {0x12, 0x34, 0x56, 0x78};
uint32_t val = *(uint32_t*)buffer; 

On a Little-Endian CPU, val will become 0x78563412. (Note that the cast itself also violates strict aliasing and can fault on alignment-strict cores — a second trap hiding in the same line.)

The Fix: Always use macros like ntohl() (Network to Host Long) or built-in compiler intrinsics like __builtin_bswap32() to ensure your data matches the CPU’s native format.
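A portable alternative that gives the same answer on any host and sidesteps the cast entirely (a sketch using byte-wise shifts):

```cpp
#include <cassert>
#include <cstdint>

// Portable big-endian read: assemble the value byte by byte, so host
// endianness and pointer alignment never matter.
uint32_t read_be32(const uint8_t* p) {
    return (uint32_t)p[0] << 24 | (uint32_t)p[1] << 16 |
           (uint32_t)p[2] << 8  | (uint32_t)p[3];
}
```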


4. Architect’s “Pro” Question: Bit-Banging vs. Hardware IPs

Q: When is “Bit-Banging” acceptable in a modern system?

A: Bit-banging (manually toggling GPIOs to emulate a protocol like I2C or SPI) is a last resort. It consumes 100% of the CPU, is sensitive to interrupt jitter, and is power-inefficient. An architect should always prefer using a dedicated Hardware IP/Controller with DMA. Bit-banging is only acceptable during early “Bring-up” when the hardware controller is broken or for extremely slow, non-critical signals.


5. Summary Table: Bitwise Best Practices

Method            | Pro                     | Con
Bit-Fields        | Clean, readable syntax. | Non-portable, compiler-dependent.
Manual Masking    | Guaranteed portability. | Error-prone (easy to miscalculate masks).
std::bitset (C++) | Type-safe, high-level.  | Overhead; often not suitable for MMIO.
Atomic Bitwise    | Thread-safe updates.    | Slower (requires bus locking).

Architect’s Interview Tip

When discussing bit manipulation, mention Read-Modify-Write (RMW). Explain that when you do REG |= (1U << n), the CPU actually performs three steps: read the register, modify the bit, and write it back. If an interrupt or another core intervenes between the “read” and the “write,” you get a Race Condition. Mentioning that you use Atomic Bit-Set/Clear hardware registers (common in modern SoCs) shows you understand real-world hardware concurrency.
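On the software side, the same idea can be sketched with C++ atomics, whose fetch_or makes the read-modify-write indivisible (illustrative only; dedicated MMIO set/clear registers achieve this differently, in hardware):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>
#include <thread>

// Two threads setting disjoint bit ranges can never lose each other's
// update, because each |= is a single atomic RMW.
std::atomic<uint32_t> g_reg{0};

void set_bits(unsigned first, unsigned last) {
    for (unsigned n = first; n <= last; ++n)
        g_reg.fetch_or(1U << n, std::memory_order_relaxed);
}
```

With a plain uint32_t and REG |= mask, the same two threads could interleave between the read and the write and drop bits.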


In the next article, we dive into the “Invisible Hand” of the build process: Linker Scripts and the Memory Map.

Coding

The “Cost” of C++ – Virtual Functions, Vtables, and Memory

by dnaadmin March 30, 2026
written by dnaadmin

In the semiconductor and embedded world, there is a long-standing myth that C++ is “too heavy” for bare-metal systems. As a System Architect, your job is to debunk this by explaining exactly where the overhead lies. The most common interview question in this domain is: “How do virtual functions work, and what is their impact on performance and memory?”


1. The Mechanics: The Vtable and the Vptr

In C++, Polymorphism (deciding which function to call at runtime) is implemented using a Virtual Method Table (Vtable).

  • The Vtable: For every class that has at least one virtual function, the compiler creates a static array of function pointers.

  • The Vptr (Virtual Pointer): Every instance of that class gets a hidden pointer (usually 4 or 8 bytes) that points to the Vtable.

The Execution Flow:

  1. Follow the Vptr from the object instance to the Vtable.

  2. Look up the function pointer at the correct offset.

  3. Jump to that address.
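Both costs are directly observable (a sketch; exact sizes are ABI-dependent): the polymorphic class carries a hidden vptr, and a call through a base reference dispatches through the vtable.

```cpp
#include <cassert>
#include <cstdint>

struct Plain { uint32_t x; };          // no virtual functions: no vptr

struct Dynamic {                       // has a vtable; instances carry a vptr
    uint32_t x;
    virtual ~Dynamic() = default;
    virtual int id() const { return 0; }
};

struct Derived : Dynamic {
    int id() const override { return 1; }
};

// The compiler cannot know at compile time which id() this calls:
// it follows the vptr, indexes the vtable, and jumps.
int call_via_base(const Dynamic& d) { return d.id(); }
```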


2. The Hidden Costs: Memory and Speed

When an interviewer asks about the “cost,” they are looking for three specific points:

A. Memory Overhead (RAM/Flash)

  • Flash: Each class with virtual functions adds a Vtable to the .rodata segment.

  • RAM: Every object instance is now larger by 4 or 8 bytes because of the Vptr. In a system with thousands of small objects (like network packets), this adds up.

B. Execution Latency

A virtual call involves an extra memory load (dereferencing the Vptr) and an indirect jump. On modern CPUs, this can cause a Branch Prediction Miss, stalling the pipeline for several cycles.

C. Code Optimization (Inlining)

This is the “Architect’s Insight.” The biggest cost isn’t the jump; it’s that the compiler cannot inline a virtual function because it doesn’t know which function will be called until the program is actually running. This prevents many powerful compiler optimizations.


3. The “No Exceptions” Rule: -fno-exceptions

In many embedded C++ projects, the first thing an architect does is disable Exceptions and RTTI (Run-Time Type Information).

  • Why? Exception handling requires a massive “unwinding” library that bloats the binary size and adds non-deterministic timing.

  • The Interview Answer: “We use C++ for its type safety and RAII, but we disable exceptions (-fno-exceptions) and RTTI (-fno-rtti) to keep the binary lean and deterministic.”


4. Architect’s “Pro” Question: The “Virtual” Destructor

Q: Why must a base class have a virtual destructor if it has virtual functions?

A: If you delete a derived object through a base class pointer (Base *p = new Derived(); delete p;) and the destructor is NOT virtual, the behavior is formally undefined; in practice only the Base destructor runs. Any resources owned by the Derived part are never released, producing a Memory Leak.


5. Summary Table: C++ Features in Embedded

Feature | Cost | Recommendation
Classes/Structs | Zero | Use freely (same as C).
Templates | Flash bloat | Use carefully; can lead to “code explosion.”
Virtual Functions | Memory + latency | Use for abstraction, but not in high-frequency loops.
Exceptions | Massive bloat | Disable in bare-metal systems.
Name Mangling | None | Use extern "C" when calling from C code.

Architect’s Interview Tip

If asked about C++ vs C, mention Zero-Cost Abstractions. Explain that features like std::array, templates, and constexpr often result in code that is just as fast (or faster) than manual C, while providing better type safety. This shows you aren’t an “Anti-C++” dinosaur, but a pragmatist who knows which tools to use.


In the next article, we return to the bit-level: Bit Manipulation, Bit-Fields, and the Endianness Trap.

Ready for Article 7?


The “Forbidden” Zone — Interrupt Service Routines (ISRs)

by dnaadmin March 30, 2026

In a standard C++ interview for a desktop role, interrupts rarely come up. In a Systems Architect interview, they are the heartbeat of the discussion. An ISR is a function called directly by hardware, and it operates under a set of constraints so strict that breaking them almost guarantees a system hang or a non-deterministic crash.


1. The Execution Environment of an ISR

When an interrupt occurs, the CPU stops whatever it is doing, saves the current context (registers, program counter), and jumps to the address in the Interrupt Vector Table.

  • High IRQL: You are running at a high priority level. This means the scheduler is often disabled.

  • No Thread Context: The ISR doesn’t “belong” to any thread. You cannot use thread-local storage or assume a specific stack size.

  • Non-Reentrancy: By default, many systems disable other interrupts while one is running. If your ISR takes too long, you will “miss” other hardware events.


2. The “Golden Rules” of ISR Programming

This is what the interviewer is looking for. If you mention these, you pass the “Embedded 101” test.

A. Do NOT Block

You cannot call mutex_lock(), sleep(), or delay(). If the mutex is held by a lower-priority thread, and the ISR waits for it, you create a Deadlock because the lower-priority thread can never run to release the lock (since the ISR has higher priority).

B. Avoid Heavy I/O (printf)

printf() is often buffered and involves complex string formatting. It is notoriously slow and non-reentrant. In an ISR, use a fast, memory-mapped logging mechanism or toggle a GPIO pin for debugging.

C. No Dynamic Allocation (malloc / free)

The heap is shared. If the main thread was in the middle of a malloc() call when the interrupt fired, the heap’s internal data structures might be in an inconsistent state. Calling malloc() again from the ISR will cause memory corruption.

D. Keep it Short (The Top-Half/Bottom-Half Pattern)

An ISR should only do the bare minimum:

  1. Acknowledge the hardware interrupt.

  2. Read/Write the critical data to a buffer.

  3. Schedule a “Tasklet” or “Deferred Procedure Call” (DPC) to do the heavy processing later at a lower priority.


3. The volatile and static Connection

How do you share data between an ISR and your main() loop?

C

volatile bool data_ready = false; // Must be volatile!

void Timer_ISR() {
    data_ready = true;
}

int main() {
    while (!data_ready); // Busy wait
    // Process data
}

If data_ready is not volatile, the compiler is free to read it once, see false, and turn the while loop into an infinite loop, since nothing inside main() modifies it. volatile forces a fresh read from memory on every iteration.


4. Architect’s “Senior” Question: Floating Point in ISRs

Q: Can you perform floating-point math in an ISR?

A: Generally, No. On many architectures (like ARM Cortex-A), the Floating Point Unit (FPU) registers are not saved by default during an interrupt to save time. If an ISR uses the FPU, it will overwrite the floating-point calculations of the interrupted thread. If you must use it, you must manually save/restore the FPU state, which adds massive latency.


5. Summary Table: ISR Restrictions

Operation | Allowed? | Why?
Bitwise ops | Yes | Fast, deterministic.
printf() | No | Slow, uses locks, non-reentrant.
malloc() | No | Heap corruption risk, non-deterministic.
Mutex lock | No | Causes priority inversion and deadlocks.
Semaphores | Only “Give” | You can signal a task, but never “Take” (wait).

Architect’s Interview Tip

When talking about ISRs, mention Latency and Jitter. An architect’s job is to ensure that the “Interrupt Latency” (the time from the hardware signal to the first line of ISR code) is minimized. Mentioning that you use Tightly Coupled Memory (TCM) for the ISR code to avoid cache misses shows you are thinking at the silicon level.


In the next article, we move to the C++ side of the world: Object-Oriented Embedded C++ and the cost of Virtual Functions.

Ready for Article 6?

About Me

Shameer Mohammed, SoC Technologist

Shameer Mohammed believes that no topic is too complex if taught correctly. Backed by 21 years of industry experience launching Tier-1 chipsets and a solid foundation in Electronics and Communication Engineering, he has mastered the art of simplifying the complicated. His unique teaching style is scientifically grounded, designed to help students digest hard technical concepts and actually remember them. When he isn't decoding the secrets of silicon technologies, Shameer is exploring the inner workings of the human machine through his passion for Neuroscience and Bio-mechanics.
