
Rust is the most-loved programming language for the fifth time. Is it our new security savior?

Matias Madou, Ph.D.
Published Jun 18, 2020

For the past few years, it seems that software engineers all over the world just can't get enough of Rust for programming. This relatively new systems programming language, produced by Mozilla, has captured the hearts of the Stack Overflow community - and, as a cohort very unlikely to suffer fools, when they vote something the "most loved programming language" five years in a row, it's time we all sat up and took notice.

The Rust programming language incorporates known and functional elements from commonly used languages, working to a different philosophy that disposes of complexity, while introducing performance and safety. There's a learning curve, and many developers aren't getting the opportunity to play with it very much - just 5.1% of those surveyed on Stack Overflow commonly used it. Still, that aside, there is no denying that it's an exciting language, and one with a great deal more security firepower than its predecessors, like C and C++. Mass adoption is going to require some change, both behavioral and technological... but right now, it's still capturing the attention of devs on a theoretical level.

... but hold up, we need to shine a light on one more thing: it's important to note that Rust is a programming language that prioritizes memory safety, and eradication of security bugs that are married to common memory management issues. Those are a big deal (and undoubtedly cause more than a few AppSec team migraines), but they are not the only secure coding challenges we face.

What does Rust prevent, exactly? And where are we still left exposed in the security landscape? Let's unpack the latest programming unicorn:

The new frontier of modern, memory-safe systems programming

Mozilla's research and development team have worked on some incredible projects, and investing in Rust programming as an open-source trailblazer is no exception. Their introductory video provides some insight into their ethos, with the key theme made very clear: the current approach to software security is flawed, and Rust is designed to solve much of that problem.

It seems too simplistic, especially since we face enormous data breaches every other day - just like the recent horrific blunder reported by EasyJet. Millions of data records are compromised frequently, almost always the work of a web application vulnerability, security misconfiguration, or phishing attack, and languages like C++ have existed for decades. However, this has not been enough time for developers to master them to the point of implementing secure coding best practices. Why should Rust be any different? New languages have come out before, and it's not like they've found a way to eradicate common vulnerabilities, or ensure any code written is magically perfect when compiled.

Simple though the concept may be, sometimes it's the simple answers that conquer complex questions. Rust is, in every sense of the word, a revolution in memory-safe systems programming that delivers on its promises in many ways... and it certainly saves the bacon of developers who are susceptible to introducing errors that can cause big problems if undetected. Java, C, C++, and even newer languages like Kotlin and Golang, remain fairly unforgiving for the security-unaware developer. With those, there are no baked-in warnings, no particular signs that the awesome feature that has just been compiled has a security gremlin hiding under the hood.

So, let's dig deeper:

What makes Rust so secure?

Typically, a developer has the primary goal of building features, ensuring they are functional and user-friendly - perhaps even sources of pride they'd be happy to show off on their resume. It is entirely normal for a developer to create some great software, ship it, and move on to the next big project. At this point, security teams check for vulnerabilities, and, if found, their "finished" application might bounce back to their team for a hotfix. The problem may be simple, or it may be completely out of reasonable scope for a developer to remediate.

The issue is that on the surface level, the security bugs were not apparent at all, and if scanning, testing and manual code review fail to pick up on them, then an attacker can potentially use that small window of opportunity to exploit the bug.

Now, Rust seeks to stop many vulnerabilities from ever making it into the code in the first place: it simply won't compile if there are ownership or memory safety violations of the kind that cause production issues all the way along the SDLC. This is memory-safe programming by design, ensuring there is no access to invalid memory (no matter how the software is executed). And with around 70% of all security bugs being the result of memory management-related issues, this is a great feat.

Rust will flag and prevent:

  • Buffer overflow
  • Use after free
  • Double-free
  • Null pointer dereference
  • Using uninitialized memory
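To see one of these protections in action, here's a minimal sketch of my own (not from the original article) showing how Rust handles an out-of-bounds read: every index is bounds-checked, and the `get` method returns an `Option` rather than ever touching invalid memory.

```rust
fn main() {
    let v = vec![1, 2, 3];

    // Every index is bounds-checked; `get` returns an Option
    // rather than reading past the end of the buffer.
    assert_eq!(v.get(10), None);
    assert_eq!(v.get(1), Some(&2));

    // `v[10]` would panic with "index out of bounds" instead of
    // silently reading invalid memory, as C or C++ might.
}
```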

If we compare a Rust code snippet with C++, it will become apparent that one is safe by default. Check out this example of a buffer overflow bug:

#include <iostream>
#include <cstring>

int main() {
    char a[3] = "12";
    char b[4] = "123";
    // buffer overflow: b occupies 4 bytes (including the NUL
    // terminator), which does not fit in the 3-byte array a
    strcpy(a, b);
    std::cout << a << "; " << b << std::endl;
}

Vs.

pub fn main() {
    let mut a: [char; 2] = ['1', '2'];
    let b: [char; 3] = ['1', '2', '3'];
    a.copy_from_slice(&b); // panics at runtime: source and destination lengths differ
}

Rust throws an error and panics upon reaching the copy_from_slice call, stopping the buffer overflow before any invalid memory is touched; note that this particular check happens at runtime rather than at compile time.
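For completeness, here is one safe way the copy could be written so it neither panics nor overflows (my sketch, not from the article): slice the source down to the destination's length before copying.

```rust
fn main() {
    let mut a = ['0', '0'];
    let b = ['1', '2', '3'];

    // copy_from_slice requires equal lengths, so copy only as many
    // elements as the destination can hold.
    let n = a.len();
    a.copy_from_slice(&b[..n]);
    assert_eq!(a, ['1', '2']);
}
```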

In that sense, it is very much one of the "start left" languages. It will highlight errors, and force-teach developers the right way to write code in order to avoid introducing memory-related security bugs, so meeting deadlines depends on the coder paying attention, remediating, and staying true to the delivery path.

The approach of this language seems simple, but it would have been an incredible feat to get it working with this powerful logic, and it does walk the walk. Rust is, from a security perspective, a giant leap forward... if only more people were using it. Companies like Dropbox are pioneering its use on a large, corporate scale, and that is great to see. But, there are more considerations before we jump to the conclusion that an adoption issue is all that is stopping us from a more secure future.

The Rust reckoning

There are a couple of small (okay, big) issues, namely that programming in Rust leaves more room to introduce bugs than it might appear. It will not fix the all-important OWASP Top 10 vulnerabilities that continue to cause breaches, delays, and a general culture of unsafe coding techniques. There is also something of an angel and devil dynamic, or, as it is more widely known: Safe Rust vs. Unsafe Rust.

As is explained in the official documentation, Safe Rust is the "true" form of Rust, and Unsafe Rust includes functions that are deemed "definitely not safe", although they are sometimes necessary - such as if integration with something in another language is required. However, even with Unsafe Rust, the list of additional functionalities is still limited. In Unsafe Rust, it is possible to do the following within unsafe blocks:

  • Dereference raw pointers
  • Call unsafe functions (including C functions, compiler intrinsics, and the raw allocator)
  • Implement unsafe traits
  • Mutate statics
  • Access fields of unions
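As an illustration of the first item on that list, a minimal sketch of mine: creating a raw pointer is permitted in Safe Rust, but dereferencing it has to happen inside an explicit unsafe block.

```rust
fn main() {
    let x = 42_i32;

    // Creating a raw pointer is allowed in Safe Rust...
    let p = &x as *const i32;

    // ...but dereferencing it must be wrapped in an unsafe block,
    // making the programmer's responsibility explicit and auditable.
    let y = unsafe { *p };
    assert_eq!(y, 42);
}
```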

Even in a so-called "unsafe" mode, one of Rust programming's superpowers still functions: the "borrow checker". It generally prevents memory issues, collisions in parallel calculations and many other bugs through static code analysis, and this analysis will still make checks in an unsafe block - it just requires a lot more work to write unsafe constructs without the compiler stepping in with guidance in certain situations.
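To make the borrow checker's role concrete, here's a small sketch (mine, not from the article) of the aliasing rules it enforces: any number of shared borrows, or exactly one mutable borrow, but never both at once. These rules are checked whether or not an unsafe block is present.

```rust
fn main() {
    let mut s = String::from("hi");

    // Any number of shared (read-only) borrows may coexist...
    let r1 = &s;
    let r2 = &s;
    assert_eq!(r1, r2);

    // ...but a mutable borrow is only allowed once the shared
    // borrows are no longer in use.
    let m = &mut s;
    m.push('!');
    assert_eq!(s, "hi!");
}
```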

This doesn't seem like a huge issue for most experienced developers - after all, we're known to tinker to get the very best out of our applications and open up some cooler functions - but it potentially opens up a black hole that can lead to serious misconfigurations and security vulnerabilities: undefined behavior. Programming in Rust (even when used unsafely) locks down the possibilities of vulnerabilities fairly well as compared to C or C++, but invoking undefined behavior can be a risk.

Is this the end of reliance on developer-led secure coding?

Remember earlier when I said Rust has components of well-known languages? One of Rust's main security vulnerabilities is that, well, it has components of well-known languages - namely C.

Rust is still a "safe programming language", but again, the introduction of a user is where things can become unstuck. The developer can still tweak it to run without flagging errors (an attractive proposition, since this unlocks more capabilities), and essentially, even in a safe state, developers can still be as "unsafe" as they like, because they have a layer of guidance and protection before things can go really pear-shaped.

And both scenarios above become more dangerous as we dive deeper, as Rust's outcomes are similar to scanning tools - just as there is no Swiss Army SAST/DAST/IAST tool that scans for every vulnerability, every attack vector, and every problem, neither does Rust catch everything. Even with Rust, some vulnerabilities can still be introduced quite easily.

The undefined behavior risk when running Unsafe Rust has the potential to open up integer overflow issues, while in general, even the safe configurations will not prevent human error in security misconfigurations, business logic, or using components with known vulnerabilities. These issues still pose a very real threat if left unpatched, and in an "assumed safe" environment like true Rust, it may even cause some complacent behavior if a coder believes all major issues will be picked up regardless.
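On the integer overflow point specifically: Rust panics on overflow in debug builds but wraps silently in release builds, so the checked_* arithmetic methods are how a developer makes overflow handling explicit. A small sketch of mine:

```rust
fn main() {
    let a: u8 = 250;

    // checked_add returns None on overflow instead of wrapping,
    // regardless of whether the build is debug or release.
    assert_eq!(a.checked_add(10), None);
    assert_eq!(a.checked_add(5), Some(255));
}
```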

I've found that Rust is not unlike a programming mentor - a senior engineer that has taken the time to sit there with a less experienced coder, reviewing their work and showing them potential bugs, pointing out efficiencies, and, in some cases, ensuring it isn't compiled until it's right. However, it is far better for Rust programmers to learn the theory and commit to best practices themselves, as that mentor might just cut the apron strings, and you don't want to be left hanging.

Ready to find and fix common Rust vulnerabilities right now? Play the challenge.


Interested in more?

Matias Madou, Ph.D. is a security expert, researcher, and CTO and co-founder of Secure Code Warrior. Matias obtained his Ph.D. in Application Security from Ghent University, focusing on static analysis solutions. He later joined Fortify in the US, where he realized that it was insufficient to solely detect code problems without aiding developers in writing secure code. This inspired him to develop products that assist developers, alleviate the burden of security, and exceed customers' expectations. When he is not at his desk as part of Team Awesome, he enjoys being on stage presenting at conferences including RSA Conference, BlackHat and DefCon.

Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.



Matias is a researcher and developer with more than 15 years of hands-on software security experience. He has developed solutions for companies such as Fortify Software and his own company Sensei Security. Over his career, Matias has led multiple application security research projects which have led to commercial products and boasts over 10 patents under his belt. When he is away from his desk, Matias has served as an instructor for advanced application security training courses and regularly speaks at global conferences including RSA Conference, Black Hat, DefCon, BSIMM, OWASP AppSec and BruCon.

Matias holds a Ph.D. in Computer Engineering from Ghent University, where he studied application security through program obfuscation to hide the inner workings of an application.


I've found that Rust is not unlike a programming mentor - a senior engineer that has taken the time to sit there with a less experienced coder, reviewing their work and showing them potential bugs, pointing out efficiencies, and, in some cases, ensuring it isn't compiled until it's right. However, it is far better for Rust programmers to learn the theory and commit to best practices themselves, as that mentor might just cut the apron strings, and you don't want to be left hanging.

Ready to find and fix common Rust vulnerabilities right now? Play the challenge.


It seems too simplistic, especially since we face enormous data breaches every other day - like the recent, horrific blunder reported by EasyJet. Millions of data records are compromised regularly, almost always through a web application vulnerability, a security misconfiguration, or a phishing attack. Languages like C++ have existed for decades, yet even that has not been enough time for most developers to master them to the point of consistently applying secure coding best practices. Why should Rust be any different? New languages have come along before, and none of them found a way to eradicate common vulnerabilities or ensure that any code written is magically perfect once compiled.

Simple though the concept may be, sometimes it's the simple answers that conquer complex questions. Rust is, in every sense of the word, a revolution in memory-safe systems programming that delivers on its promises in many ways... and it certainly saves the bacon of developers who are susceptible to introducing errors that can cause big problems if undetected. Java, C, C++, and even newer languages like Kotlin and Golang remain fairly unforgiving for the security-unaware developer. With those, there are no baked-in warnings, no particular signs that the awesome feature that has just been compiled has a security gremlin hiding under the hood.

So, let's dig deeper:

What makes Rust so secure?

Typically, a developer has the primary goal of building features, ensuring they are functional and user-friendly - perhaps even sources of pride they'd be happy to show off on their resume. It is entirely normal for a developer to create some great software, ship it, and move on to the next big project. At that point, security teams check for vulnerabilities and, if any are found, the "finished" application might bounce back to the development team for a hotfix. The problem may be simple, or it may be completely out of reasonable scope for a developer to remediate.

The issue is that, at the surface level, these security bugs are not apparent at all, and if scanning, testing, and manual code review fail to pick them up, an attacker can use that window of opportunity to exploit them.

Now, Rust seeks to stop many vulnerabilities from ever making it into the code in the first place: it simply won't compile if the code contains ownership or borrowing violations of the kind that cause memory safety bugs all the way along the SDLC. This is memory-safe programming by design, ensuring there is no access to invalid memory, no matter how the software is executed. And with roughly 70% of all security bugs being the result of memory management issues, that is a great feat.

Rust will flag and prevent:

  • Buffer overflow
  • Use-after-free
  • Double-free
  • Null pointer dereference
  • Use of uninitialized memory
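To make the ownership idea behind these guarantees concrete, here is a minimal sketch (not from the original post) of how moving a value rules out the use-after-free class at compile time:

```rust
// Ownership demo: once a String is moved into `consume`, the old binding is
// dead, so a use-after-free simply cannot be expressed in Safe Rust.
fn consume(s: String) -> usize {
    s.len() // `s` is dropped (its heap buffer freed) when this function returns
}

fn main() {
    let secret = String::from("secret");
    let n = consume(secret);
    // println!("{}", secret); // would not compile: value moved into `consume`
    println!("length: {}", n);
}
```

The commented-out line is the whole point: the compiler rejects any attempt to touch `secret` after its buffer has been handed off and freed.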

If we compare a Rust code snippet with C++, it will become apparent that one is safe by default. Check out this example of a buffer overflow bug:

#include <iostream>
#include <string.h>

int main() {
    char a[3] = "12";
    char b[4] = "123";
    strcpy(a, b); // buffer overflow: b occupies 4 bytes (with the null terminator), but a only holds 3
    std::cout << a << "; " << b << std::endl;
}

Vs.

pub fn main() {
    let mut a: [char; 2] = ['1', '2'];
    let b: [char; 3] = ['1', '2', '3'];
    a.copy_from_slice(&b); // panics at runtime: source length (3) != destination length (2)
}

Rust raises the alarm by panicking when it reaches the copy_from_slice call, preventing the buffer overflow - although note that this particular check happens at runtime, not at compile time.
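If the runtime panic is undesirable, the length check can be made explicit. A small sketch (`copy_checked` is a hypothetical helper, not part of the standard library):

```rust
// Instead of letting copy_from_slice panic on a length mismatch, surface the
// mismatch as a Result the caller must handle.
fn copy_checked(dst: &mut [char], src: &[char]) -> Result<(), String> {
    if dst.len() != src.len() {
        return Err(format!("length mismatch: src {} vs dst {}", src.len(), dst.len()));
    }
    dst.copy_from_slice(src); // safe now: lengths are known to match
    Ok(())
}

fn main() {
    let mut a = ['_'; 2];
    let b = ['1', '2', '3'];
    match copy_checked(&mut a, &b) {
        Ok(()) => println!("copied: {:?}", a),
        Err(e) => println!("refused: {}", e),
    }
}
```

Either way, the overflow never reaches adjacent memory - the difference is only whether the failure is a panic or a recoverable error.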

In that sense, it is very much one of the "start left" languages. It will highlight errors, and force-teach developers the right way to write code in order to avoid introducing memory-related security bugs, so meeting deadlines depends on the coder paying attention, remediating, and staying true to the delivery path.

The approach of this language seems simple, but building a compiler that enforces this powerful logic was an incredible feat, and it does walk the walk. Rust is, from a security perspective, a giant leap forward... if only more people were using it. Companies like Dropbox are pioneering its use at a large, corporate scale, and that is great to see. But there are more considerations before we jump to the conclusion that an adoption problem is all that stands between us and a more secure future.

The Rust reckoning

There are a couple of small (okay, big) issues, namely that programming in Rust leaves more room to introduce bugs than it might appear. It will not fix the all-important OWASP Top 10 vulnerabilities that continue to cause breaches, delays, and a general culture of unsafe coding techniques. There is also something of an angel-and-devil dynamic, or, as it is more widely known: Safe Rust vs. Unsafe Rust.

As is explained in the official documentation, Safe Rust is the "true" form of Rust, while Unsafe Rust permits operations that are "definitely not safe", although they are sometimes necessary - such as when integrating with code written in another language. Even then, the list of additional capabilities is small. Within unsafe blocks, it is possible to:

  • Dereference raw pointers
  • Call unsafe functions (including C functions, compiler intrinsics, and the raw allocator)
  • Implement unsafe traits
  • Mutate statics
  • Access fields of unions
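The first item on that list can be sketched in a few lines; this hypothetical example is sound only because the raw pointer is derived from a valid reference:

```rust
// Dereferencing a raw pointer requires an `unsafe` block: the compiler no
// longer proves validity for us, so soundness becomes the programmer's job.
fn read_via_raw(x: &i32) -> i32 {
    let p: *const i32 = x; // creating a raw pointer is safe...
    unsafe { *p }          // ...dereferencing it is not, hence the block
}

fn main() {
    let v = 41;
    println!("{}", read_via_raw(&v) + 1);
}
```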

Even in so-called "unsafe" mode, one of Rust programming's superpowers keeps functioning: the "borrow checker". It generally prevents memory issues, data races in concurrent code, and many other bugs through static analysis, and those checks still apply inside an unsafe block - it just becomes possible, with considerably more effort, to write unsafe constructs that sidestep the compiler's guidance in certain situations.
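As a sketch of the concurrency side of this: the type system will not let threads share mutable state unless it is wrapped in a synchronization primitive (here, the standard library's Arc + Mutex pattern), so the data race never ships:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Handing a plain `&mut` to several threads is rejected at compile time;
// shared mutable state must be reference-counted and locked.
fn parallel_sum() -> i32 {
    let total = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (1..=4)
        .map(|i| {
            let total = Arc::clone(&total);
            thread::spawn(move || *total.lock().unwrap() += i)
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let sum = *total.lock().unwrap();
    sum
}

fn main() {
    println!("sum: {}", parallel_sum()); // 1 + 2 + 3 + 4
}
```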

This doesn't seem like a huge issue for most experienced developers - after all, we're known to tinker to get the very best out of our applications and open up some cooler functions - but it potentially opens up a black hole that can lead to serious misconfigurations and security vulnerabilities: undefined behavior. Programming in Rust (even when used unsafely) locks down the possibilities for vulnerabilities fairly well compared to C or C++, but invoking undefined behavior remains a risk.

Is this the end of reliance on developer-led secure coding?

Remember earlier when I said Rust has components of well-known languages? One of Rust's main security vulnerabilities is that, well, it has components of well-known languages - namely C.

Rust is still a "safe programming language", but again, introducing a user is where things can come unstuck. A developer can still tweak the code so it runs without flagging errors (an attractive proposition, since this unlocks more capabilities), so even working in Safe Rust, developers can behave as "unsafely" as they like - the language only gives them a layer of guidance and protection before things go really pear-shaped.

And both scenarios above become more dangerous as we dive deeper, because Rust's guarantees are like those of scanning tools - just as there is no Swiss Army SAST/DAST/IAST/RASP tool that catches every vulnerability, every attack vector, and every problem, neither does Rust. Even with Rust, some vulnerabilities can still be introduced quite easily.
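To make that concrete, here is a sketch (the `build_query_naive` helper is hypothetical, and no real database is involved) of a classic injection pattern that Safe Rust happily compiles:

```rust
// Memory safety says nothing about injection: string-built "queries" are
// perfectly safe Rust and perfectly exploitable application logic.
fn build_query_naive(user_input: &str) -> String {
    format!("SELECT * FROM users WHERE name = '{}'", user_input)
}

fn main() {
    let q = build_query_naive("alice'; DROP TABLE users; --");
    println!("{}", q); // the payload lands in the query unescaped
}
```

The fix - parameterized queries - is a developer habit, not something any compiler can enforce.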

The undefined behavior risk when running Unsafe Rust has the potential to open up integer overflow issues, while in general even the safe configuration will not prevent human error in security misconfiguration, business logic, or the use of components with known vulnerabilities. These issues still pose a very real threat if left unaddressed, and in an "assumed safe" environment like pure Safe Rust, they may even breed complacency if a coder believes all major issues will be picked up regardless.
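On the integer overflow point specifically: debug builds panic on overflow while release builds wrap silently, so making the policy explicit is worth the few extra characters. A minimal sketch using the standard library's checked arithmetic:

```rust
// `checked_add` returns None on overflow instead of wrapping or panicking,
// forcing the caller to decide what an out-of-range result should mean.
fn safe_increment(x: u8) -> Option<u8> {
    x.checked_add(1)
}

fn main() {
    println!("{:?}", safe_increment(10));  // Some(11)
    println!("{:?}", safe_increment(255)); // None: 256 does not fit in a u8
}
```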

I've found that Rust is not unlike a programming mentor - a senior engineer that has taken the time to sit there with a less experienced coder, reviewing their work and showing them potential bugs, pointing out efficiencies, and, in some cases, ensuring it isn't compiled until it's right. However, it is far better for Rust programmers to learn the theory and commit to best practices themselves, as that mentor might just cut the apron strings, and you don't want to be left hanging.

Ready to find and fix common Rust vulnerabilities right now? Play the challenge.

Matias Madou, Ph.D. is a security expert, researcher, and CTO and co-founder of Secure Code Warrior. Matias obtained his Ph.D. in Application Security from Ghent University, focusing on static analysis solutions. He later joined Fortify in the US, where he realized that it was insufficient to solely detect code problems without aiding developers in writing secure code. This inspired him to develop products that assist developers, alleviate the burden of security, and exceed customers' expectations. When he is not at his desk as part of Team Awesome, he enjoys being on stage presenting at conferences including RSA Conference, BlackHat and DefCon.

Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.
