
Secure coding technique: Securely deleting files
Deleting files on a computer system is tricky. Everyone, even your mother, has deleted one file too many and been relieved to find it still sitting in the trash, ready to be recovered.
Data in computer systems is represented as a sequence of bits. The system therefore needs to do some bookkeeping within the file system to know which bits belong to which file. This information includes the file's size, the time it was last modified, its owner, its access permissions, and so on. This bookkeeping data is stored separately from the contents of the file.
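As a quick illustration, this bookkeeping data can be inspected directly. A minimal sketch using Python's standard library (the filename is just an example for the demo):

```python
import os
import stat
import time

# Create a small file so the example is self-contained.
with open("example.txt", "w") as f:
    f.write("hello")

# os.stat() returns the file system's bookkeeping data, which is
# stored separately from the file's contents.
info = os.stat("example.txt")
print(info.st_size)                 # size in bytes: 5
print(time.ctime(info.st_mtime))    # time of last modification
print(stat.filemode(info.st_mode))  # access permissions, e.g. '-rw-r--r--'

os.remove("example.txt")
```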
Usually, when a file is removed, nothing happens to the bits representing its contents; only the bookkeeping data is changed, so that the system knows this part of the storage is now meaningless and can be reused. Until another file is saved in that location and the original bits are overwritten, the data can often still be recovered. This not only makes deletion fast but also provides a useful way to undo it.
However, this approach has downsides. When an application handles sensitive information, it will save that data somewhere on the file system. At some point, when the information is no longer needed, the data may be deleted. If no extra care is taken, it may still be recoverable, even though the developer intended it to be gone for good.
The easiest way to completely erase that data is to overwrite the file contents with random data (sometimes even several times over). Several methods of secure file removal exist, such as the Gutmann method, and their effectiveness varies across storage types and file systems. For day-to-day application use, however, these are overkill: you can simply overwrite the data yourself.
Be careful, though! Do not use all zeros or other low-entropy data. Many file systems optimize the writing of such sparse files and may leave some of the original content in place. Instead, generate cryptographically secure random data and overwrite the entire file contents before deleting the file itself.
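Putting the advice above together, a minimal sketch in Python might look like this (the function name and `passes` parameter are our own; `os.urandom` supplies cryptographically secure random bytes):

```python
import os

def secure_delete(path: str, passes: int = 1) -> None:
    """Overwrite a file's contents with secure random data, then remove it."""
    length = os.path.getsize(path)
    with open(path, "r+b") as f:  # open for in-place binary overwrite
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(length))  # high-entropy data, never all zeros
            f.flush()
            os.fsync(f.fileno())  # ask the OS to push the write to the device
    os.remove(path)
```

Note that this is a best-effort sketch: on SSDs with wear leveling, or on journaling and copy-on-write file systems, writing to the same file offset may not physically overwrite the original blocks, which is exactly the data remanence problem described below.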
Data remanence is the residual physical representation of data that has been in some way erased. After storage media is erased there may be some physical characteristics that allow data to be reconstructed.