This is the first piece in my Majors series. The whole point of the series is to give an intro-style overview of what different disciplines actually look like as undergraduate majors — what subfields they have, what kinds of career paths come out of them. For the first piece I want to talk about CS, walking through how my understanding of the field has shifted since I came to college, and how that compares to what I thought I knew in high school.
My interest in CS started as a kid playing with robots — writing simple code to make robots do deterministic tasks, the LEGO robotics kind. Later I did some algorithm contests and gradually picked up competitive programming techniques, and in high school I worked on a couple of small AI projects. That was basically my mental model of CS going into college, and it fell apart almost immediately once I got there. Looking back, what I had been exposed to was just the surface of CS — as a discipline it actually has many distinct areas with a wealth of interesting open problems.

And honestly, that's the thing I admire most about it: unlike some disciplines that hold themselves at a remote, intimidating distance, CS uses different layers of abstraction to let people from all kinds of backgrounds participate and feel its appeal. Most readers of this article probably don't understand how foundation models work under the hood, but they benefit from them every day. People with a bit more interest can dive deeper — using a coding agent to write code for a project, or calling an LLM API to automate a task. People with more background can keep going down the foundation-model line: fine-tuning for specific tasks, even pretraining, developing the relevant algorithms, working on the underlying mathematical theory, and so on. There's a place in this field for everyone. And foundation models are just one currently popular subfield within AI; the discipline has many other branches, like systems and theory. This is what I mean when I say CS has both vertical accessibility and horizontal heterogeneity.
Next I want to talk about what subfields CS contains and roughly what each of them does. CS as a discipline is itself the highly coupled product of math, ECE, physics, linguistics, cognitive science, and more, so the boundaries between its subfields are often blurry. There’s no single right way to slice it — different perspectives produce different taxonomies — but the differences tend not to be huge. Here I’ll use the taxonomy from CSRankings and look at things from a research angle.
On CSRankings
Before getting started, let me briefly introduce CSRankings. The site is widely regarded within the CS community as a reasonable proxy for a school's research strength. Its rankings are computed entirely from a weighted count of how many top-conference papers each school's faculty publish in each area. The methodology is fairly objective, but it has its issues. First, it can't distinguish groundbreaking work from incremental work — both count as one paper in its statistics (publication += 1) even though their actual impact can differ by orders of magnitude. Second, it counts conference rather than journal publications. (As an aside: the convention in CS is to publish in conferences over journals, because the field iterates so fast that journal review cycles, often several times longer than conference cycles, can't keep up.) That convention puts the small set of professors and subfields that prefer journals — biocomputing, for example — at a major disadvantage. There's also the fact that schools with more faculty have a built-in advantage: even if average faculty quality is the same, the school with more faculty will rack up more papers. But that's not entirely bias, because more faculty does mean a thicker ecosystem. I'll come back to these specific issues later in the school-selection chapter; for now, just keep in mind that CSRankings isn't a perfect ranking but is still a useful reference for understanding any given school's strength, and it works well as a guide to the field's taxonomy.
OK, back to the main thread. On CSRankings, CS is grouped into roughly four big clusters: AI, Systems, Theory, and a set of Interdisciplinary areas. Each cluster is relatively self-contained — every area, and even every subfield within it, has its own community — but I’ve always felt these four clusters have a logical dependency: Theory is the most foundational layer, providing the mathematical grounding for algorithms and computation itself; Systems sits on top of Theory, building the actual hardware and software infrastructure that runs computation; AI extends Systems with applications that learn from data; and Interdisciplinary is where CS meets other disciplines (biology, economics, the arts, and so on), generally tilting toward applications. None of this is absolute, of course — AI itself has plenty of theoretical research directions, like learning theory. But I’ll go through them in this logical order below.
Theory: the mathematical foundation of CS
What Theory broadly does is study the mathematical properties of computation itself — what's computable, what's not, how much of a given resource (time, memory) a computable problem requires, how to prove an algorithm is optimal. These questions are the bedrock of CS as a discipline, providing the language and tools we use to talk about computation. On CSRankings, theory is further split into three subfields: Algorithms & Complexity, Cryptography, and Logic & Verification.
Algorithms & Complexity is the most classical direction in theory. I remember from my algorithm-contest days that the most important thing to do before solving any problem was to look at the data scale and constraints to pick the right algorithm — that’s what got you within the time complexity the problem allowed. The spirit is the same in research, just with much more complex problems: algorithm research is about designing faster algorithms for specific problems (graph algorithms, approximation algorithms, online algorithms, randomized algorithms, etc.), and complexity is the inverse — studying the minimum resources (time, space, randomness, and so on) any problem in a given class requires, drawing a theoretical lower bound for algorithm design. The two lines push the upper bound down and the lower bound up, with the goal of meeting in the middle. For example, the lower bound for comparison-based sorting has been proven to be $\Omega(n \log n)$, and merge sort and heap sort happen to hit that bound, so sorting is essentially settled in the comparison model. But many harder problems — matrix multiplication, shortest paths — still have non-trivial gaps between their upper and lower bounds, and that’s exactly what the algorithm and complexity people are working on. The famous P vs NP problem belongs to complexity, and it’s been open for over fifty years — listed by the Clay Mathematics Institute as one of the seven Millennium Prize Problems.
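For readers who haven't seen why that bound holds, the classic decision-tree argument fits in a few lines. A comparison sort must distinguish all $n!$ possible orderings of its input, and each comparison has two outcomes, so any such algorithm is a binary decision tree with at least $n!$ leaves. A binary tree with $n!$ leaves has height at least $\log_2(n!)$, and since $n! \ge (n/2)^{n/2}$,

$$\log_2(n!) \;\ge\; \frac{n}{2}\log_2\frac{n}{2} \;=\; \Omega(n \log n),$$

so every comparison sort needs $\Omega(n \log n)$ comparisons in the worst case.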
Cryptography looks on the surface like an application of algorithms, but it actually has its own complete and independent theoretical framework. It studies how to guarantee the confidentiality, integrity, and authenticity of information in the presence of an adversary. The unique thing about this direction is that security definitions are always built on top of some complexity assumption — RSA’s security, for example, rests on the assumption that factoring large integers is hard. So cryptography and complexity are intrinsically entangled: you have to start from a hardness assumption before you can build a crypto scheme on top of it. Active topics in this area in recent years include post-quantum cryptography (worried that future quantum computers will break today’s mainstream crypto, so designing schemes that resist quantum attacks ahead of time), zero-knowledge proofs (letting one party prove they know some secret without revealing any additional information — the foundation of many blockchain systems), and multi-party computation, among others.
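To make that dependence on hardness assumptions concrete, here's a toy RSA sketch in Python. It uses absurdly small primes, no padding, and none of the other machinery real RSA requires, so treat it purely as an illustration of where factoring enters the picture:

```python
# Toy RSA -- illustrative only, never use for real security.
p, q = 61, 53
n = p * q                          # public modulus (3233); secure only if n is hard to factor
phi = (p - 1) * (q - 1)            # computable only by someone who knows the factors p, q
e = 17                             # public exponent, chosen coprime to phi
d = pow(e, -1, phi)                # private exponent: e*d = 1 (mod phi); needs Python 3.8+

message = 42
ciphertext = pow(message, e, n)    # encrypt: m^e mod n, doable by anyone holding (n, e)
recovered = pow(ciphertext, d, n)  # decrypt: c^d mod n, requires the private d
assert recovered == message        # correctness follows from Euler's theorem
```

The whole scheme collapses the moment someone can factor $n$: knowing $p$ and $q$ gives them $\varphi(n)$ and hence $d$. With 61 and 53 that takes microseconds; real deployments use primes over a thousand bits long.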
Logic & Verification is a somewhat smaller community, but the problems it tackles are very concrete: how do you use formal methods to mathematically prove that a piece of code or a system is correct? For an OS kernel, a compiler, or a distributed protocol, how do you guarantee that under all possible inputs it won’t crash, leak data, or produce a race condition? That’s what verification answers. The area has deep overlap with the PL (programming languages) community, since many verification tools are built on top of PL concepts like type systems and operational semantics. One of the better-known results in this area is seL4 — a fully formally verified microkernel that can be mathematically proven to satisfy its specification, which is why it gets used in high-assurance settings like military and aerospace systems.
This part may feel abstract to readers who haven’t encountered verification before, so let me use an algorithm-contest analogy. In contest programming and everyday software development, you typically check whether a piece of code is correct by writing a lot of test cases and running them, not by doing formal verification. Test cases are easy to understand: if I have a function $f(x) = x^2$, then input $2$ should give output $4$, input $3$ should give output $9$, and so on. So you just write a lot of test cases that check whether each input produces the expected output. The cost is much lower than formal verification, because testing only checks a finite set of specific inputs against expected outputs, rather than mathematically proving that the program meets its requirements for all possible inputs. But real-world problems are usually much more complex, and there will always be corner cases your test suite doesn’t cover. For most software, that’s not a big deal — products we use every day like Chrome contain plenty of known and unknown bugs, and the core logic of building them is fast iteration rather than perfection. It’s a tradeoff: users can put up with small annoyances and wait for the next update. But in domains like aviation, aerospace, or cryptocurrency, an unconsidered corner case can mean someone dies or someone loses a lot of money — and that’s when it’s worth paying many times the cost to do formal verification.
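As a minimal sketch of what the testing side looks like (plain Python, no framework):

```python
import random

def f(x):
    return x * x

# Example-based tests: check a handful of hand-picked inputs.
assert f(2) == 4
assert f(3) == 9
assert f(-5) == 25          # corner cases like negatives are easy to forget

# Property-based style: sample many random inputs and check properties
# that should hold for all x. More coverage, but still finitely many inputs.
for _ in range(10_000):
    x = random.randint(-10**6, 10**6)
    assert f(x) >= 0 and f(x) == f(-x)

print("all checks passed")  # evidence of correctness, not a proof
```

Formal verification, by contrast, would prove the property for every possible $x$: exactly the guarantee the loop above can't give.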
Chip design follows the same logic. Once a chip is taped out, you can’t patch it. Out of the millions of chips you produce, just one corner case being triggered can mean recalling the entire batch. A classic example is Intel’s 1994 Pentium FDIV bug — floating-point division gave wrong answers under certain extreme inputs, and Intel ended up spending nearly half a billion dollars on the recall. So before tape-out, modern chips usually go through extensive formal verification to prove the design satisfies its specification under all valid inputs.
Theory has a high bar for math background. The work mostly happens on a whiteboard rather than in an IDE, and papers are almost entirely proofs rather than experiments. So if you don't genuinely enjoy reasoning and proof, the day-to-day grind in this direction can be tough. On the other hand, theory results have the longest shelf life in CS — a good algorithms paper can stay heavily cited for decades, whereas systems and AI iterate so fast that what was state-of-the-art five years ago might be irrelevant today.
Systems: making computation actually run
Systems is probably the largest cluster in CS by footprint. It covers nearly all the infrastructure that lets computation actually run on hardware, from chip design at the bottom up through OS, networking, databases, PL, SE, and so on. The research style is almost the opposite of theory: theory is formulas and proofs on paper, systems is hands-on. Almost every paper builds a real prototype and then measures its performance (latency, throughput, power, etc.), using empirical data to back up its claims. Systems papers therefore often come with a substantial codebase.
Continuing the algorithm-contest perspective: an algorithm tries to bring down the time complexity, while systems certainly also cares about making things run faster (i.e., optimizing the constant factor) — but it actually owns much messier territory than that, including how to keep multi-threaded code race-condition-free, how to avoid losing data when a machine crashes, what kind of API is good for the upper layer to use, and so on. Most of those concerns have nothing to do with raw speed, but they’re all systems’ responsibility.
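To show what "race-condition-free" actually means, here's a minimal Python sketch of the classic lost-update bug and its lock-based fix (the exact counts and timing will vary by machine):

```python
import threading

counter = 0

def unsafe_increment(n):
    # Read-modify-write as separate steps: another thread can write
    # between our read and our write, and its update gets lost.
    global counter
    for _ in range(n):
        tmp = counter
        counter = tmp + 1

threads = [threading.Thread(target=unsafe_increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("without lock:", counter)   # usually well below the expected 400000

counter = 0
lock = threading.Lock()

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:                # makes the read-modify-write atomic
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("with lock:", counter)      # always exactly 400000
```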
CSRankings splits systems into about a dozen subfields. Below I’ll walk through a few of the more representative ones.
Computer Architecture is the part of systems closest to hardware, studying how the insides of CPUs, GPUs, TPUs, and similar chips should be designed: how the cache hierarchy is organized, RISC vs CISC instruction sets, how the pipeline is laid out, how to handle branch prediction and memory consistency, and so on. As Moore’s Law has been slowing down over the last few years, the gains from just stuffing in more transistors are basically gone, so the architecture line is increasingly focused on designing accelerators for specific workloads (deep learning, graph computation, cryptography, and so on). Google’s TPU and NVIDIA’s Tensor Cores are products of this trend.
Operating Systems mostly studies how the OS kernel should be designed: how to schedule processes, how to manage memory, how to handle I/O, how to implement file systems, and so on. Active directions in recent years include unikernels (merging an application with the kernel into a single-purpose binary to squeeze out performance), microkernels (splitting a traditional monolithic kernel into multiple user-space services for reliability and security), and OS redesigns for new hardware (persistent memory, SmartNICs, disaggregated memory, and so on).
Networking studies how data is transmitted reliably and efficiently between machines: from switch design within a LAN, to wide-area networks across data centers, to routing protocols on the Internet — all of it falls under this area. A lot of recent work has focused on data-center networking, because cloud computing and large-model training are pushing everyone to care more about achieving extremely low latency and non-blocking communication inside the data center.
High-Performance Computing studies how to scale a computation efficiently across hundreds, thousands, even tens of thousands of cores. The traditional applications are scientific computing — climate simulation, fluid dynamics, first-principles materials calculations — the kind of workload that runs on national supercomputers every day, with PDE solvers and large-scale linear algebra (the broader family of numerical methods) sitting underneath. Honestly, HPC feels to me like the most encompassing direction in systems: deeply intertwined with architecture and networking, and at the same time having to care about applied-math details like numerical stability. Plus, today’s large-model training runs directly on the infrastructure HPC has been developing for decades — GPU clusters, high-speed interconnects between nodes, collective communication — these have all been HPC’s bread and butter, so HPC is essentially the foundation underlying all of large-model training.
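Since collective communication came up, here's a toy Python simulation of ring all-reduce, the collective behind data-parallel gradient averaging. Real implementations (NCCL, MPI) move these chunks over an actual network, but the chunk choreography is the same:

```python
import numpy as np

p = 4                                                        # number of simulated "nodes"
grads = [np.arange(8, dtype=float) * (i + 1) for i in range(p)]
chunks = [list(np.array_split(g.copy(), p)) for g in grads]  # each node splits its vector into p chunks

# Phase 1, reduce-scatter: over p-1 steps, each node passes one chunk to its ring
# neighbor, which adds it in. Afterwards node i holds the full sum of chunk (i+1) % p.
for step in range(p - 1):
    snapshot = [[c.copy() for c in node] for node in chunks]  # model simultaneous sends
    for i in range(p):
        ci = (i - step) % p
        chunks[(i + 1) % p][ci] += snapshot[i][ci]

# Phase 2, all-gather: the completed chunks circulate around the ring, overwriting stale copies.
for step in range(p - 1):
    snapshot = [[c.copy() for c in node] for node in chunks]
    for i in range(p):
        ci = (i + 1 - step) % p
        chunks[(i + 1) % p][ci] = snapshot[i][ci]

expected = sum(grads)
assert all(np.allclose(np.concatenate(node), expected) for node in chunks)
print("every node now holds the summed gradient")
```

The appeal of the ring layout is that each node only ever talks to its neighbor, so the bandwidth used per node stays constant no matter how many nodes join.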
While I’m here, let me also mention numerical methods. Strictly speaking it’s a branch of applied math rather than a part of CS systems, but since its most common compute platform is HPC, the two communities work together a lot. Numerical methods addresses a fundamental mismatch between mathematics and computers: math itself is a precise language — $\pi$ has an exact definition — but computers can only store numbers approximately as floating-point values, so every operation introduces a small amount of error. At scale, those errors accumulate and, if not handled carefully, can render the entire result meaningless. Numerical methods studies how to design algorithms so that error stays bounded, convergence is provable, and efficiency is acceptable. This whole toolkit is also at the heart of large-model training. Training is essentially the accumulation of massive numbers of floating-point operations, and once numerical stability is mishandled the loss can spike and the model just diverges, costing tens of millions of dollars. That’s why so much of the recent large-model systems work — mixed precision (using lower-precision formats like FP16 / BF16 / FP8 for performance, which itself introduces stability issues), loss scaling (patching the gradient underflow problem of FP16), and increasingly sophisticated scaling strategies — sits at the intersection of numerical methods and systems.
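A quick Python illustration of both the drift and a classic fix (the exact digits vary by platform, but the pattern doesn't):

```python
# 0.1 has no exact binary representation, so repeated addition drifts.
total = 0.0
for _ in range(10_000_000):
    total += 0.1
print(total)                      # ~999999.9998..., not 1000000.0

# Kahan (compensated) summation: track the rounding error and feed it back in.
def kahan_sum(values):
    s, c = 0.0, 0.0               # running sum, running compensation
    for v in values:
        y = v - c                 # correct the input by the error carried over
        t = s + y                 # low-order bits of y may be lost here...
        c = (t - s) - y           # ...this recovers exactly what was lost
        s = t
    return s

print(kahan_sum(0.1 for _ in range(10_000_000)))  # ~1000000.0
```

Loss scaling and mixed-precision recipes are the same kind of move at a much larger scale: restructure the arithmetic so that rounding error stays controlled.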
The databases area studies how to store, index, and query large amounts of data. A modern database system has to handle a lot of problems: how to run concurrent transactions while maintaining ACID, how to distribute queries across many machines, how to optimize SQL query plans, how to handle streaming data, and so on. Active directions in recent years include in-memory databases, cloud-native databases (Snowflake, BigQuery, that kind of thing), and vector databases purpose-built for large-model retrieval.
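As a tiny taste of what "maintaining ACID" means in practice, here's the atomicity guarantee sketched with Python's built-in sqlite3 (a real database engine, if a small one):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
con.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 100), ("bob", 0)])
con.commit()

def transfer(src, dst, amount):
    try:
        with con:  # one transaction: commits on success, rolls back on exception
            con.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                        (amount, src))
            balance = con.execute("SELECT balance FROM accounts WHERE name = ?",
                                  (src,)).fetchone()[0]
            if balance < 0:
                raise ValueError("insufficient funds")  # undoes the debit above too
            con.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                        (amount, dst))
    except ValueError:
        pass

transfer("alice", "bob", 250)  # fails: alice only has 100
print(con.execute("SELECT * FROM accounts ORDER BY name").fetchall())
# [('alice', 100), ('bob', 0)] -- no half-finished transfer survives
```

Atomicity is just one letter of the four; doing this correctly while thousands of transactions run concurrently across many machines is where the research lives.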
Programming Languages studies the languages themselves: how the language is designed, how the type system is built, how the compiler translates high-level code into machine code. Different languages embody different design tradeoffs. C++, for instance — popular in algorithm contests and high-frequency trading — gives the user full control over memory and is fast, but it’s easy to write use-after-free or buffer overflow bugs. Java and Python use a garbage collector to manage memory, which is safer but adds runtime overhead. Rust has been on the rise lately because, with ownership and borrow checking, it does memory safety at compile time — you get the safety without GC, and you avoid the worst classes of C++ bugs. The PL community has deep overlap with the formal verification work mentioned earlier, so there’s a lot of cross-talk between the two.
Security spans a very wide range of layers: from low-level hardware security (Spectre, Meltdown, those kinds of side-channel attacks) to OS security and network security, all the way up to web security and ML security at the application layer. Almost every layer has its own attack model and defense mechanisms.
Software Engineering studies how to organize, test, and maintain large code bases. With the rise of AI, the area has become very active — program synthesis, automatic bug fixing, automatic test-case generation are all hot topics right now. Worth noting: Software Engineering as an academic research area has very little to do with the SWE (software engineer) job we usually talk about at big tech companies; they just happen to share a name. What SWE actually means in practice will get its own treatment later in the career-path chapter.
The problems systems researchers tackle mostly come from real engineering pain points, so academia and industry are unusually tightly connected in this area. That said, it’s not absolute. Software engineering, just mentioned, is a counterexample: the questions it cares about as a research topic — how to organize a large codebase, how to scale code review, how to design CI/CD pipelines — are questions on which companies like Google, by virtue of having enormous codebases, have accumulated far more first-hand experience than academia, and the relevant best practices typically come out of industry first. The unique contribution of academic software engineering researchers is to systematize these industry patterns, but the original problem-solving frontier really does sit in industry.
Systems demands the opposite kind of person from theory: it places a high bar on engineering ability, the work happens mostly in a terminal and a profiler rather than at a whiteboard, and papers are almost all benchmarks and measurements rather than mathematical proofs. So if you don’t genuinely enjoy writing code and wrestling with real hardware, the work can feel pretty tedious. On the flip side, systems also has one of the fastest paths to real-world impact in CS — a meaningful paper might get absorbed into industrial standard practice within just a few years, and during your PhD it’s quite possible to see a system you built yourself actually get deployed.
AI: from perception to decision-making
AI is the fastest-growing and most-talked-about direction in CS in recent years, so it hardly needs an introduction. But AI itself is split into many subfields too — let me briefly walk through them.
CSRankings divides AI into AI (general), Computer Vision, Machine Learning & Data Mining, Natural Language Processing, and Web & Information Retrieval. But honestly, ever since the transformer burst onto the scene in 2017 and then triggered the GenAI wave that followed, the boundaries between these subfields have grown blurrier and blurrier. Vision and NLP used to be relatively independent communities; now everyone uses the same backbone (transformer) and the same paradigm (pretrain + finetune), and multimodal models are taking over. So below I’ll go by the actual research landscape today rather than strictly following CSRankings’ categories.
Foundation Models are the hottest direction of the last couple of years. The core problem is how to train a model that generalizes across a wide range of tasks. Specific subproblems include architecture design (how to modify the attention mechanism, how to handle long context), training (data composition for pretraining, scaling laws, RLHF, and so on), inference (acceleration, quantization, speculative decoding), and evaluation (designing benchmarks that actually measure model capability). This direction is extremely compute-intensive — much of the frontier work can only be done in industry labs (OpenAI, Anthropic, Google DeepMind, and others), because academia rarely has access to GPU clusters at the scale required to train a frontier model.
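To ground the "architecture design" piece a little, here's a minimal numpy sketch of single-head scaled dot-product attention, the operation much of that work keeps revisiting. Real models add multiple heads, causal masking, KV caching, positional encodings, and learned projections on top:

```python
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # how much each query attends to each key
    scores -= scores.max(axis=-1, keepdims=True)  # stabilize the softmax (a numerics trick)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                            # each output is a weighted mix of values

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))                  # 8 tokens, 16-dim embeddings
out = attention(x, x, x)                          # self-attention: Q = K = V = x
print(out.shape)                                  # (8, 16)
```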
Computer Vision studies how a model understands images and videos. Specific tasks include classification, detection, segmentation, generation, and so on. Before the foundation-model era, CV was a relatively self-contained direction with its own backbones (ResNet, ViT, and so on) and its own task suite. Today, vision is increasingly being absorbed into multimodal models. Generation is also very active right now — diffusion models and video generation are some of the hot areas.
Natural Language Processing studies how a model understands and generates natural language. The field has essentially been reshaped by LLMs. The classic NLP tasks — translation, summarization, question answering — are now downstream applications of LLMs, and much of NLP research has shifted toward the capabilities and alignment of the LLMs themselves.
Reinforcement Learning studies how an agent learns an optimal policy through interaction with an environment. RL has had an interesting arc over the last decade: it was once most famous for game-playing (AlphaGo, AlphaStar), then went through a stretch of being seen as not particularly practical. But over the last couple of years RLHF has made it indispensable for training LLMs and brought it back into the spotlight. The recent reasoning models (OpenAI’s o-series, DeepSeek’s R-series) have pushed RL to the very center of the LLM training pipeline.
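For a feel of the interaction loop, here's a tabular Q-learning sketch on a made-up six-cell corridor with a reward only at the right end. Modern RL replaces the table with a neural network, but the update rule is the same idea:

```python
import random

N, ACTIONS = 6, (0, 1)                 # states 0..5; action 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.1      # learning rate, discount, exploration rate
Q = [[0.0, 0.0] for _ in range(N)]

random.seed(0)
for episode in range(500):
    s = 0
    while s != N - 1:                  # episode ends at the rightmost cell
        # epsilon-greedy: mostly exploit the current estimate, sometimes explore
        a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda a: Q[s][a])
        s2 = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s2 == N - 1 else 0.0
        # Q-learning update: nudge Q(s, a) toward r + gamma * max_a' Q(s2, a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 2) for q in Q])   # values grow toward the goal: ~[0.66, 0.73, 0.81, 0.9, 1.0, 0]
```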
ML Theory studies the mathematical properties of ML: why deep networks generalize, what the optimization landscape looks like, why over-parameterized models don’t overfit, and so on. It has deep overlap with the theory cluster mentioned earlier, and requires a strong math background.
AI for Science is a relatively new cross-disciplinary direction that has emerged in the last few years, applying ML to specific scientific problems. The most famous example is DeepMind’s AlphaFold, which essentially solved protein structure prediction — an open problem in biology for fifty years — and earned the 2024 Nobel Prize in Chemistry as a result. Beyond that, AI for math, AI for materials science, and others are all getting more attention. As an aside: using AI to do the kind of formal verification mentioned earlier, in order to bring down its cost, is now one of the most important research questions within AI for math.
Honestly, AI feels to me more like a fusion of theory and systems, spanning a very wide range: research here can lean very theoretical or very systems-heavy, so the bar for the people in it is unusually high. The very top AI researchers tend to be full-stack researchers strong in both math and engineering — they can derive scaling laws and other theoretical analyses, and at the same time get a full training pipeline running stably across thousands of GPUs. That kind of profile is rare in any other CS subfield, but in AI it's almost the standard at top labs.
Separately, from a personal-development standpoint, AI is probably one of the best-paying directions in CS right now, with some of the best job prospects. But it's also extremely competitive and iterates extremely fast — a paper put up on arXiv can be obsolete three months later. Surviving in this direction requires a particular mindset: you have to keep up with the community's pace, but you also have to keep your judgment in the middle of all the hype and not get swept along.
Interdisciplinary: where CS meets other fields
The last cluster is interdisciplinary — the territory where CS meets other disciplines. The subfields here mostly apply CS methods and tools to a specific problem domain, so working in this area generally requires a dual background. Below I’ll walk through a few representative directions.
Computational Biology / Bioinformatics is CS applied to biology. Specific tasks include genome sequencing, protein structure prediction, drug discovery, single-cell analysis, and so on; the AlphaFold mentioned earlier is a landmark result in this area. Because both the data scale and complexity in biology are growing very fast and ML tools keep getting stronger, this line is likely to be a growth area for a long time to come.
Computer Graphics studies how to use computers to generate, represent, and manipulate visual content — film effects, game rendering, 3D modeling, physical simulation all fall under this umbrella. With the development of VR/AR and the progress of generative models in recent years, the area has become more active too. Techniques like NeRF and Gaussian Splatting are good examples of successfully combining traditional graphics with modern ML.
Human-Computer Interaction studies the interaction between people and computers: UI/UX design, accessibility, AR/VR, novel input devices. HCI is one of the most user-facing directions in CS, so its research often pulls in user studies and psychology — methodologies that aren't traditionally part of CS. With the explosion of GenAI, a lot of people have started studying human-AI interaction and AI's social impact, both of which also fall within HCI.
Robotics studies how to make a physical agent perceive, reason, and act in the real world. The area naturally spans ML, control theory, mechanical engineering, and several other disciplines. With the recent progress in LLMs, using LLMs as the high-level planner for robots has become an active research direction.
Economics & Computation sits at the intersection of CS and economics, somewhat similar to Operations Research, with research questions including mechanism design (how to design an auction so that participants bid honestly, for example), algorithmic game theory, and market design. The line is also tightly connected to industry — the ad-bidding systems behind Google, Meta, and similar companies are backed by a lot of research in this area.
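To give the flavor of a mechanism-design result: in a second-price (Vickrey) auction the winner pays the second-highest bid, which makes bidding your true value a dominant strategy. Here's a quick Python simulation comparing honest bidding against 20% underbidding, with valuations drawn at random:

```python
import random

# In a second-price auction, shading your bid can only lose you auctions you'd
# have profited from; it never lowers the price you pay when you win.
def utility(my_value, my_bid, other_bids):
    highest_other = max(other_bids)
    if my_bid > highest_other:        # win, pay the second-highest bid
        return my_value - highest_other
    return 0.0                        # lose, pay nothing

random.seed(0)
honest_total, shaded_total = 0.0, 0.0
for _ in range(100_000):
    my_value = random.uniform(0, 100)
    others = [random.uniform(0, 100) for _ in range(3)]
    honest_total += utility(my_value, my_value, others)        # bid = true value
    shaded_total += utility(my_value, 0.8 * my_value, others)  # underbid by 20%

print(honest_total >= shaded_total)   # True: honesty weakly dominates
```

Real ad auctions are far messier (budgets, repeated play, ML-driven bidding), which is exactly why this remains an active research area.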
Visualization studies how to present high-dimensional or complex data in ways humans can understand. The direction is small but very important in scenarios like data science and scientific computing.
CS Education studies how CS should be taught. Specific research questions include language design (how to design a programming language friendly to beginners), pedagogy (how to teach abstraction, how to teach recursion), and access equity (how to bring more underrepresented groups into the discipline).
The interdisciplinary cluster as a whole offers a very wide menu, well-suited to people who, in addition to CS, have an interest in some specific domain. Another nice property: because the problems come from many sources and funding is more spread out, the area is relatively less affected by hype cycles in any single field.
Coming back to the question I opened with: what kind of discipline does CS turn out to be once you get to college? My biggest takeaway, personally, is that CS is much broader than the coding and algorithm contests I encountered in high school. It can be an extremely mathematical discipline (look at theory), an extremely engineering-oriented discipline (look at systems), and a discipline that crosses with almost any other field (even law, philosophy, and so on). Everyone can find a direction that fits their background and interests, and that’s the real meaning of horizontal heterogeneity that I mentioned at the beginning.