Secure coding technique: Securely deleting files

Published Sep 10, 2017
by Pieter De Cremer

Deleting files on a computer system is tricky. Everybody, even your mother, has deleted one file too many and been relieved to find it still in the trash, where it could be recovered.

Data in computer systems is represented by a sequence of bits. That means the system needs to do some bookkeeping within the file system to know which bits represent which file. Among this information is the size of the file, the time it was last modified, its owner, access permissions and so on. This bookkeeping data is stored separately from the contents of the file.
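
As an illustration, most platforms let you inspect this bookkeeping data directly. Here is a small Python sketch using the standard stat call (the file name is just a placeholder):

```python
import os
import stat
import time

# Read the file system's bookkeeping data for a file.
# "report.txt" is a placeholder name used for illustration.
info = os.stat("report.txt")

print(info.st_size)                  # size of the file in bytes
print(time.ctime(info.st_mtime))     # time it was last modified
print(info.st_uid)                   # owner (numeric user id)
print(stat.filemode(info.st_mode))   # access permissions, e.g. '-rw-r--r--'
```

Note that none of this output comes from the contents of the file; it all lives in the separate bookkeeping structures described above.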

Usually, when a file is removed, nothing happens to the bits representing it; only the bookkeeping data is changed so that the system knows this part of the storage is now meaningless and can be reused. Until another file is saved in that location and the original bits are overwritten, the data can often still be recovered. This not only makes deleting files faster but also makes it possible to undo the deletion.

However, there are downsides to this approach. When an application on a computer system handles sensitive information, it will save this data somewhere on the file system. At some point, when the information is no longer needed, this data may be deleted. If no extra care is taken, the data may still be recoverable, even though the developer intended for all of it to be gone.

The easiest way to completely erase that data is to rewrite the file content with random data (sometimes even several times over). There are several existing methods of secure file removal, such as the Gutmann method, and they vary across storage types and file systems. However, for day-to-day application use, these are a bit overkill and you can simply overwrite the data yourself.

Be careful though! Do not use all zeros or other low-entropy data. Many file systems may optimize writing such sparse files and leave some of the original content in place. Instead, generate cryptographically secure random data and overwrite the entire file contents before deleting the file itself.
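
Putting this together, here is a minimal Python sketch of the approach: overwrite the file in place with cryptographically secure random data, flush it to disk, and only then remove the file. The function name and chunk size are illustrative, and journaling or copy-on-write file systems and SSD wear levelling may still retain older copies of the data.

```python
import os

def secure_delete(path, chunk_size=64 * 1024):
    """Best-effort secure removal: overwrite with random data, then delete."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:          # open for in-place binary writing
        written = 0
        while written < size:
            block = min(chunk_size, size - written)
            f.write(os.urandom(block))    # high-entropy data, never all zeros
            written += block
        f.flush()
        os.fsync(f.fileno())              # force the new bytes onto the device
    os.remove(path)                       # finally remove the bookkeeping entry
```

An application that previously called os.remove("cached_secret.tmp") would call secure_delete("cached_secret.tmp") instead once the sensitive data is no longer needed.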

Data remanence is the residual physical representation of data that has been in some way erased. After storage media is erased there may be some physical characteristics that allow data to be reconstructed.

https://fas.org/irp/nsa/rainbow/tg025-2.htm
