Secure coding technique: Securely deleting files
Deleting files on a computer system is tricky. Everyone, even your mother, has deleted one file too many and been relieved to find it still sitting in the trash, ready to be recovered.
Data in computer systems is represented as a sequence of bits, so the system needs to do some bookkeeping within the file system to know which bits belong to which file. This metadata includes the size of the file, the time it was last modified, its owner, its access permissions, and so on, and it is stored separately from the contents of the file.
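As a quick illustration, on POSIX-like systems you can inspect this bookkeeping data directly. A minimal Python snippet, with example.txt as a hypothetical stand-in file:

```python
import os
import stat
import time

# Inspect a file's bookkeeping data (its inode metadata on POSIX systems).
# "example.txt" is a hypothetical file used for illustration.
info = os.stat("example.txt")
print("size:", info.st_size, "bytes")
print("last modified:", time.ctime(info.st_mtime))
print("owner uid:", info.st_uid)
print("permissions:", stat.filemode(info.st_mode))
```

Notice that nothing here touches the file's contents: the metadata lives in its own structures, which is exactly what makes the deletion behavior described next possible.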
Usually, when a file is removed, nothing happens to the bits representing its contents; only the bookkeeping data is changed, so the system knows that part of the storage is now meaningless and can be reused. Until another file is saved in the same location and those bits are overwritten, the original data can often still be recovered. This not only makes deletion fast but also makes it possible to undo.
However, this approach has downsides. An application that handles sensitive information will save that data somewhere on the file system, and at some point, when the information is no longer needed, it will be deleted. If no extra care is taken, the data may still be recoverable even though the developer intended it to be gone for good.
The easiest way to completely erase that data is to overwrite the file contents with random data (sometimes even several times over) before deleting the file. There are several established methods of secure file removal, such as the Gutmann method, and their effectiveness varies across storage types and file systems. For day-to-day application use, however, these are overkill: you can simply overwrite the data yourself.
Be careful, though! Do not use all zeros or other low-entropy data. Many file systems optimize writes of such sparse files and may leave some of the original content untouched on disk. Instead, generate cryptographically secure random data and overwrite the entire file contents with it before deleting the file itself.
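To make this concrete, here is a minimal sketch in Python of the overwrite-then-delete approach described above. The function name secure_delete, the pass count, and the chunk size are illustrative choices, not a standard API, and the sketch assumes a traditional in-place filesystem: copy-on-write filesystems (e.g. Btrfs, APFS), journaling, SSD wear leveling, and backups can all retain copies of the data regardless.

```python
import os

def secure_delete(path: str, passes: int = 1, chunk_size: int = 64 * 1024) -> None:
    """Overwrite a file with cryptographically secure random bytes, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:  # open for in-place writing, no truncation
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(chunk_size, remaining)
                # os.urandom yields high-entropy bytes, so the write cannot be
                # optimized away as a sparse (all-zero) region.
                f.write(os.urandom(n))
                remaining -= n
            f.flush()
            os.fsync(f.fileno())  # force the overwrite through the OS cache to disk
    os.remove(path)  # delete only after the contents are gone

secure_delete("secrets.txt")  # hypothetical file containing sensitive data
```

Once the overwrite has been flushed and fsynced, the file is removed as usual; at that point the blocks handed back to the file system contain only random noise.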
Data remanence is the residual physical representation of data that has been erased in some way. Even after storage media are erased, physical characteristics may remain that allow the data to be reconstructed.

