
Coders Conquer Security: OWASP Top 10 API Series - Lack of Resources and Rate Limiting
The Lack of Resources and Rate Limiting API vulnerability behaves almost exactly as its name suggests. Every API has limited resources and computing power available to it, depending on its environment. Most are also required to field requests from users or other programs asking the API to perform its intended function. This vulnerability occurs when too many requests arrive at the same time and the API does not have enough computing resources to handle them. The API can then become unavailable or unresponsive to new requests.
APIs become vulnerable to this problem if their rate or resource limits are not set correctly, or if limits are left undefined in the code. An API can then be overloaded if, for example, a business experiences a particularly busy period. But it is also a security vulnerability, because threat actors can deliberately flood unprotected APIs with requests in order to perform Denial of Service (DoS) attacks.
By the way, how are you doing with the API gamified challenges so far? If you want to try your skills in handling a rate limiting vulnerability right now, step into the arena:
Now, let's go a little deeper.
What are some examples of the lack of resources and rate limiting API vulnerability?
There are two ways that this vulnerability can sneak into an API. The first is when a coder simply doesn't define what the throttle rates should be for an API. There might be a default setting for throttle rates somewhere in the infrastructure, but relying on that is not a good policy. Instead, each API should have its rates set individually. This is especially true because APIs can have vastly different functions as well as available resources.
For example, an internal API designed to serve just a few users could have a very low throttle rate and work just fine. But a public-facing API that is part of a live eCommerce site would most likely need an exceptionally high rate defined to compensate for the possibility of a surge in simultaneous users. In both cases, the throttling rates should be defined based on the expected needs, the number of potential users, and the available computing power.
It might be tempting, especially with APIs that are likely to be very busy, to set the rates to unlimited in an attempt to maximize performance. This could be done with a simple bit of configuration (as an example, we'll use the Python Django REST framework):
"DEFAULT_THROTTLE_RATES": {
    "anon": None,
    "user": None
}
In that example, both anonymous users and users known to the system can contact the API an unlimited number of times, with no regard to the number of requests over time. This is a bad idea because no matter how much computing capacity an API has available, attackers can deploy resources like botnets to eventually slow it to a crawl or knock it offline altogether. When that happens, valid users will be denied access and the attack will have succeeded.
Eliminating Lack of Resources and Rate Limiting Problems
Every API that is deployed by an organization should have its throttle rates defined in its code. This could include things like execution timeouts, maximum allowable memory, the number of records per page that can be returned to a user, or the number of processes permitted within a defined timeframe.
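The "number of processes permitted within a defined timeframe" idea can be sketched with a minimal, framework-agnostic fixed-window counter. This is an illustrative toy, not a production implementation (real APIs would normally rely on their framework's or gateway's built-in throttling); the class and method names are invented for this example:

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Allow at most `limit` requests per `window_seconds` for each client key."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counts = defaultdict(int)   # (key, window index) -> request count

    def allow(self, key, now=None):
        """Return True if this request fits within the current window's budget."""
        now = time.time() if now is None else now
        bucket = (key, int(now // self.window))  # which window this request falls into
        if self.counts[bucket] >= self.limit:
            return False                 # over the limit: reject (e.g. with HTTP 429)
        self.counts[bucket] += 1
        return True

limiter = FixedWindowLimiter(limit=3, window_seconds=3600)
print([limiter.allow("client-1", now=1000) for _ in range(5)])
# [True, True, True, False, False]
```

Once the window rolls over (here, after an hour), the counter for a key starts fresh, which is the behavior the per-hour rates below describe.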
From the above example, instead of leaving the throttling rates wide open, they could be tightly defined with different rates for anonymous and known users.
"DEFAULT_THROTTLE_RATES": {
    "anon": config("THROTTLE_ANON", default="200/hour"),
    "user": config("THROTTLE_USER", default="5000/hour")
}
In the new example, the API limits anonymous users to 200 requests per hour. Known users who have already been vetted by the system are given more leeway at 5,000 requests per hour. But even they are limited, to prevent an accidental overload at peak times or to compensate if a user account is compromised and used for a denial of service attack.
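It can help to see what a rate string such as "200/hour" actually means to the framework: a request count paired with a window duration. The helper below is a standalone sketch of that interpretation, not Django REST framework's actual code (the framework resolves its rate strings similarly, keying on the first letter of the period):

```python
def parse_rate(rate):
    """Return (allowed_requests, window_in_seconds) for a '<num>/<period>' string."""
    if rate is None:
        return None, None                # None means throttling is disabled
    num, period = rate.split("/")
    # Map the period to seconds by its first letter: sec, min, hour, day.
    duration = {"s": 1, "m": 60, "h": 3600, "d": 86400}[period[0]]
    return int(num), duration

print(parse_rate("200/hour"))   # (200, 3600)
print(parse_rate("5000/hour"))  # (5000, 3600)
```

Seen this way, "200/hour" is simply a budget of 200 requests that refills every 3,600 seconds, which is why choosing the numerator carefully for each class of user matters.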
As a final good practice, display a notification to users when they have reached the throttling limits, along with an explanation of when those limits will reset. That way, valid users know why the application is rejecting their requests. The notification can also help when valid users performing approved tasks are denied access to an API, because it signals to operations personnel that the throttling limits need to be raised.
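The conventional way to deliver that notification over HTTP is a 429 "Too Many Requests" status with a Retry-After header telling the client when to try again. The function below is an illustrative sketch of assembling such a response; the function name and payload shape are invented for this example rather than taken from any particular framework:

```python
def throttled_response(window_seconds, window_start, now):
    """Build a rejection payload that tells the client when the limit resets."""
    retry_after = int(window_start + window_seconds - now)  # seconds until reset
    return {
        "status": 429,                                      # HTTP "Too Many Requests"
        "headers": {"Retry-After": str(retry_after)},
        "body": f"Rate limit reached. Try again in {retry_after} seconds.",
    }

resp = throttled_response(window_seconds=3600, window_start=0, now=1200)
print(resp["headers"]["Retry-After"])  # 2400
```

Returning an explicit reset time keeps well-behaved clients from hammering the API with retries, and gives legitimate users a clear explanation instead of a silent failure.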
Check out the Secure Code Warrior blog pages for more insight about this vulnerability and how to protect your organization and customers from the ravages of other security flaws. You can also try a demo of the Secure Code Warrior training platform to keep all your cybersecurity skills honed and up-to-date.


Matias Madou, Ph.D. is a security expert, researcher, and CTO and co-founder of Secure Code Warrior. Matias obtained his Ph.D. in Application Security from Ghent University, focusing on static analysis solutions. He later joined Fortify in the US, where he realized that it was insufficient to solely detect code problems without aiding developers in writing secure code. This inspired him to develop products that assist developers, alleviate the burden of security, and exceed customers' expectations. When he is not at his desk as part of Team Awesome, he enjoys being on stage presenting at conferences including RSA Conference, BlackHat and DefCon.

Secure Code Warrior helps you secure code across the entire software development lifecycle and build a culture in which cybersecurity is top of mind. Whether you're an AppSec manager, software developer, CISO, or anyone involved in security, we can help your organization reduce the risks associated with insecure code.
Matias is a researcher and developer with more than 15 years of hands-on software security experience. He has developed solutions for companies such as Fortify Software and his own company, Sensei Security. Over his career, Matias has led multiple application security research projects that have led to commercial products, and he holds over 10 patents. Away from his desk, Matias has served as an instructor for advanced application security training courses and regularly speaks at global conferences including RSA Conference, Black Hat, DefCon, BSIMM, OWASP AppSec and BruCon.
Matias holds a Ph.D. in Computer Engineering from Ghent University, where he studied application security through program obfuscation to hide the inner workings of an application.


