Memory-Safe Languages and the Future of Software Development
Modern programming languages are designed to protect us from critical runtime errors. Depending on the context, this protection is essential; in other situations, it is merely "nice to have." No language, however, can shield us from errors in business logic.
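To make this concrete, here is a minimal sketch in Rust. The function name and discount rule are hypothetical; the point is that the code is perfectly memory-safe and compiles without complaint, yet the business rule is implemented backwards, and no compiler can catch that.

```rust
// Hypothetical order-total function: memory-safe, but the business rule is wrong.
// Intended rule: 10% discount only for orders of 100 or more.
fn total_with_discount(subtotal: u32) -> u32 {
    // Logic bug: the comparison is inverted, so *small* orders get the discount.
    // The type system and borrow checker have no opinion on this.
    if subtotal < 100 {
        subtotal * 90 / 100 // 10% off
    } else {
        subtotal
    }
}

fn main() {
    // Compiles and runs with no memory error; the mistake is purely in the logic.
    println!("{}", total_with_discount(200)); // intended 180, actually returns 200
}
```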
Nearly a year ago, the White House issued a call to adopt memory-safe programming languages to prevent future cyberattacks, as reported by TechRepublic and Stack Overflow. However, the linked White House pages are no longer accessible. In an NSA report from early 2023, the following languages were classified as memory-safe: Python, Java, C#, Go, Delphi/Object Pascal, Swift, Ruby, Rust, and Ada.
Admittedly, this list is not exhaustive, and the primary goal is to avoid using C and C++. But what would the consequences of this shift be?
We’re not talking about trivial projects here. The focus is on operating systems, network components, security-critical libraries, IoT, automotive systems, and more. These are domains where only a small fraction of software developers are active, and deep expertise is absolutely essential. If you already have solid knowledge of C, C++, or similarly powerful languages, you can certainly benefit from the capabilities of the languages mentioned above. However, if you lack this foundational knowledge and start your career directly with one of these languages, I see significant disadvantages and the potential for long-term issues.
Software development without tools such as modern compilers, linters, and similar utilities is unthinkable today. Rust, in particular, is relentless in forcing you to write safe code. Add to this the growing use of AI for code generation, to the point where I would argue that the resulting code, even when functionally flawless, is sometimes not understood by the developers themselves. And that is precisely the problem: the development environment prevents problematic code, but developers rely on AI to get there because they no longer learn the fundamentals of how such things work. In the end, you are at the mercy of your tools and left in the dark. What does that mean for maintainability?
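As a brief illustration of that strictness, here is a small sketch of the kind of pattern Rust's borrow checker rejects at compile time. The commented-out lines show a use-after-move the compiler refuses; the `consume` helper is my own illustrative name.

```rust
// Summing a vector's elements while keeping the vector usable afterwards.
fn consume(v: &[i32]) -> i32 {
    v.iter().sum()
}

fn main() {
    let data = vec![1, 2, 3];

    // If `consume` took ownership (`fn consume(v: Vec<i32>)`), this would NOT compile:
    //
    //   consume(data);
    //   println!("{}", data.len()); // error[E0382]: borrow of moved value: `data`
    //
    // Borrowing instead keeps `data` valid after the call:
    let sum = consume(&data);
    println!("len={}, sum={}", data.len(), sum);
}
```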
Proper architecture seems to be fading away. One extreme trend chases the next, as in the sometimes absurd overuse of microservices, which many teams are thankfully now scaling back. If we return to the roots and understand the core problems of software development, clean and clear structures will inevitably lead to safer code. Of course, I still expect tools to support this process; I wouldn't want to do without them. But the ability to write safe code yourself, and to understand the potential risks, must not be lost.
Tools are tools and must remain just that, even when they are integrated directly into the compiler. I don't even want to imagine what would happen if a bug slipped in there and no one was able to recognize it. And to bring up Rust once more, since I haven't used another language that is similarly strict: if developers need to reach for unsafe Rust or call unwrap manually, they will. The former can genuinely produce unsafe code; the latter will, at worst, crash the program at runtime.
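The unwrap point can be sketched in a few lines. The `parse_port` helper is a hypothetical name; the contrast is between explicitly handling the `None` case and letting `unwrap` turn it into a panic.

```rust
// Parsing a port number that may be absent or malformed.
fn parse_port(s: &str) -> Option<u16> {
    s.parse().ok()
}

fn main() {
    // Explicit handling: the failure path is visible in the code, no panic possible.
    match parse_port("not-a-number") {
        Some(p) => println!("using port {}", p),
        None => println!("invalid port, falling back to default"),
    }

    // By contrast, this line would compile fine but panic at runtime:
    //
    //   let p = parse_port("not-a-number").unwrap();
    //   // thread 'main' panicked at: called `Option::unwrap()` on a `None` value
}
```

The crash is at least loud and immediate, which is exactly the behavior described above: a doubtful value does not silently corrupt anything, but the program is gone.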
Always be prepared for problems.