definitions for what is CVE worthy with downloads/installs and containers


definitions for what is CVE worthy with downloads/installs and containers

Kurt Seifried
So I've seen the classic "a CVE is for a security vulnerability, a security vulnerability is something that crosses a trust boundary". 

Obviously this is open to all sorts of interpretation, e.g. for passwords we can all agree a secret backdoor with a hard coded password is a CVE, but what about an app that has a default password that you are then forced to change once you login? What about an app that must be exposed to the network (introducing a race where an attacker can potentially get in first)? In general we have a good idea of where to draw the line for passwords (documented? changeable? is there a realistic secure way to deploy this product?).

So first a quick story: my sons play Minecraft a lot, so I'm going to set them up a server. I found some software; setup of course is annoying (some weird dependencies that aren't packaged on my platforms of choice). So I thought "hey, let's find a Docker container!" and luckily there are several:

https://github.com/5t111111/docker-pocketmine-mp/blob/master/Dockerfile
You will note it has the line:

RUN cd PocketMine-MP && wget -q -O - http://cdn.pocketmine.net/installer.sh | bash -s - -v beta

which is a fancy way of saying "go get http://cdn.pocketmine.net/installer.sh and run it". Luckily this is slightly mitigated by an earlier

USER pocketmine

statement, which means the command runs as a regular user rather than root. But a quick search of GitHub reveals:

https://github.com/search?utf8=%E2%9C%93&q=RUN+bash+wget++http&type=Code&ref=searchresults
which for example shows:

https://github.com/wyvernnot/docker-minecraft-pe-server/blob/master/Dockerfile

which does not downgrade to a user but instead runs the script as root. So at what point do we draw a line in the sand for "downloads random stuff and runs it" being CVE worthy? My thoughts:

To make it less CVE worthy:

1) Documents mentioning what this is doing and that it is dangerous 
2) Downgrading to less privileged users
3) Uses HTTPS to serve the content
4) Uses a well known/trusted site to serve the content


To make it more CVE worthy:

1) No documents/mention of what it is doing
2) Runs commands as a privileged user (e.g. root)
3) Uses HTTP to download content (and has no end-to-end signing/checks)
4) Uses basically random servers nobody has ever heard of
5) Is widely used (e.g. for containers, something in the Docker Registry)
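To make the contrast concrete, the sketch below shows the checksum gate a safer build step could use instead of piping straight from wget to bash. It is only an illustration: the installer file is created locally to stand in for the download, and all names are invented; in a real Dockerfile the file would be fetched over HTTPS from a trusted host and the expected checksum pinned as a constant.

```shell
#!/bin/sh
set -eu

# Stand-in for the download step; in a Dockerfile this would be something
# like: wget -q -O installer.sh https://trusted.example.com/installer.sh
printf 'echo installed\n' > installer.sh

# Normally the expected checksum is a constant baked into the Dockerfile;
# it is computed here only so the example is self-contained.
expected=$(sha256sum installer.sh | cut -d' ' -f1)

# Refuse to execute anything that does not match the pinned checksum.
if echo "$expected  installer.sh" | sha256sum -c - >/dev/null 2>&1; then
    sh installer.sh
else
    echo "checksum mismatch, aborting" >&2
    exit 1
fi
```

A tampered download changes the hash, so the build fails instead of silently executing attacker-controlled code, which addresses items 3) and 4) above even when the transport or server is not fully trusted.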

For example, a Dockerfile from Nginx:

https://github.com/nginxinc/docker-nginx/blob/11fc019b2be3ad51ba5d097b1857a099c4056213/mainline/alpine/Dockerfile
TL;DR: They set the GPG key fingerprint as an env variable in the Dockerfile:

ENV GPG_KEYS B0F4253373F8F6F510D42178520A9993A1C052F8

They later download that key and use it to verify the nginx tarball they downloaded:

&& gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$GPG_KEYS" \
&& gpg --batch --verify nginx.tar.gz.asc nginx.tar.gz \

so they are definitely trying to do the right thing (I need to confirm that this will actually error out during build if the key isn't available, the wrong key is served, or the .asc signature is bad). Assuming it works as expected (an error triggers the Docker build to abort), this is obviously safe and there is no need for a CVE.
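The build-abort behaviour rests on two things: gpg --batch --verify exits non-zero on a bad or missing signature, and a Dockerfile RUN step fails the whole build when its command exits non-zero. Because the steps are joined with &&, the first failure short-circuits the chain. A minimal stand-in for that chain semantics, with `false` playing the part of a failed signature check:

```shell
#!/bin/sh
# Steps joined with && stop at the first non-zero exit, just as in the
# nginx Dockerfile's RUN chain; `false` stands in for a failed
# `gpg --batch --verify`, and the non-zero status is what aborts the build.
if echo "fetched nginx.tar.gz" && false && echo "unpacked"; then
    echo "build continues"
else
    echo "build aborts"
fi
```

A wrong key served by the keyserver should fail the same way: the signature cannot be verified against it, so gpg exits non-zero and the chain stops before anything is unpacked.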

But most containers are not doing anything like this, not even close, and I suspect we need to start assigning CVEs, as it looks like a lot of popular container Dockerfiles are very insecure in how they build software.




--
Kurt Seifried -- Red Hat -- Product Security -- Cloud
PGP A90B F995 7350 148F 66BF 7554 160D 4553 5E26 7993
Red Hat Product Security contact: [hidden email]

RE: definitions for what is CVE worthy with downloads/installs and containers

Common Vulnerabilities & Exposures

Kurt –

 

As you are well aware, CVE assignment is never an exact science. The following is a description of our current practice:

 

- The question of whether it is "software acting exactly as it is designed" depends on who sends the CVE ID request. For example, it is plausible for a vendor's server to offer the same executable code (or update service) through both HTTP and HTTPS, and the URL hardcoded into a client-side product was -- by design -- supposed to start with https, but it started with http by accident. Thus, if it is a vendor-initiated request for a CVE ID to tag a required security update for their customers, then the CVE ID request is always accepted.

- If the origin of the CVE ID request seems unrelated to the party that wrote the code, then (sometimes but not 100% of the time) the CVE ID request is rejected with a suggestion to consult with the vendor.

- It would be hard to achieve 100% rejections, even if a CNA wanted to, because the person sending the CVE ID request may neglect to mention, or may be unwilling to mention, the precise nature of the problem. A large fraction of the population believes that it is always a vulnerability for any product to continuously make requests for executable code over unencrypted HTTP, with no other integrity protection, and execute code whenever a response is received. Because that much is obvious in their world view, their vulnerability description may focus on other details, such as file-format manipulation, etc.

- Our prevailing opinion is that, for this HTTP/executable-code scenario, the best a CNA can do is assign CVE IDs in cases where they believe CVE consumers want those IDs to exist. If the requester sends a credible description of high exploitation likelihood, and there is no counterclaim from the vendor itself that this is "software acting exactly as it is designed," then it qualifies for a CVE ID.

 

This matches what happened for ASUS (the vendor refused to respond at all). If another requester does not describe exploitation likelihood or asserts that there is essentially no exploitation likelihood, and there is no clarification from the vendor, then the request can be rejected on the "software acting exactly as it is designed" grounds.

 

In other words, existence of a CVE ID should depend a little less on a comprehensive theory of what a vulnerability is, and depend a little more on judgment about whether the ID will help real-life organizations with risk management. This requires a little more work from the CNA, but makes CVE more useful than with either the 100% accept or 100% reject options.

 

Regards,

 

The CVE Team

 

 

 

 

From: [hidden email] [mailto:[hidden email]] On Behalf Of Kurt Seifried
Sent: Monday, June 06, 2016 12:18 PM
To: cve-editorial-board-list <[hidden email]>
Subject: definitions for what is CVE worthy with downloads/installs and containers

 


Re: definitions for what is CVE worthy with downloads/installs and containers

Kurt Seifried


On Tue, Jun 7, 2016 at 9:14 PM, Common Vulnerabilities & Exposures <[hidden email]> wrote:

> [...]
>
> Our prevailing opinion is that, for this HTTP/executable-code scenario, the best a CNA can do is assign CVE IDs in cases where they believe CVE consumers want those IDs to exist. If the requester sends a credible description of high exploitation likelihood, and there is no counterclaim from the vendor itself that this is "software acting exactly as it is designed," then it qualifies for a CVE ID.

By definition, if people are asking for CVEs for a security vulnerability, they want them to exist. As well, as a user of various Open Source and closed source products, I want to be an informed consumer; the easiest way to do this currently is with CVEs (issues are consolidated in a single easily searched database, as opposed to many vendor sites, which (intentionally?) make it hard to find security information about their products).

> This matches what happened for ASUS (the vendor refused to respond at all). If another requester does not describe exploitation likelihood or asserts that there is essentially no exploitation likelihood, and there is no clarification from the vendor, then the request can be rejected on the "software acting exactly as it is designed" grounds.
>
> In other words, existence of a CVE ID should depend a little less on a comprehensive theory of what a vulnerability is, and depend a little more on judgment about whether the ID will help real-life organizations with risk management. This requires a little more work from the CNA, but makes CVE more useful than with either the 100% accept or 100% reject options.

So for example we have KeePass 2, which refuses to fix their HTTP update check because it would cost the developer ad revenue:


so not only do we have a known security vulnerability, but we have a vendor flat out refusing to fix it. Now I'm going to assume users of KeePass 2 would like to know this, and I find it unlikely the vendor will inform them. As such, a CVE (with its resulting propagation to vulnerability management services) is one of the better ways to ensure people get notified.

 

 


--
Kurt Seifried -- Red Hat -- Product Security -- Cloud
PGP A90B F995 7350 148F 66BF 7554 160D 4553 5E26 7993
Red Hat Product Security contact: [hidden email]

RE: definitions for what is CVE worthy with downloads/installs and containers

Common Vulnerabilities & Exposures

Kurt –

 

We agree completely with you, and we provided CVE-2016-5119 to the requester last month.

 

Regards,

 

The CVE Team

 

From: Kurt Seifried [mailto:[hidden email]]
Sent: Wednesday, June 08, 2016 12:23 PM
To: Common Vulnerabilities & Exposures <[hidden email]>
Cc: cve-editorial-board-list <[hidden email]>
Subject: Re: definitions for what is CVE worthy with downloads/installs and containers

 

 

 


Re: definitions for what is CVE worthy with downloads/installs and containers

Pascal Meunier
I have difficulties with some statements:

"If the origin of the CVE ID request seems unrelated to the party that
wrote the code, then (sometimes but not 100% of the time) the CVE ID
request is rejected with a suggestion to consult with the vendor."

It can be very difficult to "consult with the vendor".  It's much, much
easier to just disclose the vulnerability without a CVE.  I'm afraid
that the above policy is a strong incentive against using CVE identifiers.

Also, I'm confused by the paragraph with the ASUS example as it seems to
contradict the preceding one.

Pascal




RE: definitions for what is CVE worthy with downloads/installs and containers

Common Vulnerabilities & Exposures
>From: Pascal Meunier
>Sent: Wednesday, June 15, 2016 12:01 PM
>Subject: Re: definitions for what is CVE worthy with downloads/installs and containers
>
> [...]

Pascal -

Keep in mind that these statements were all made in the context of reports that a product uses an http: URL to reach executable code, and then executes that code. We currently do not want 100% of these reports to receive CVE IDs, and thus this situation is a special case. The "CVE ID request is rejected with a suggestion to consult with the vendor" outcome is not a universal CVE ID assignment practice; it only applies in a special case.

The rationale for not assigning CVE IDs to 100% of these reports is discussed in Kurt's 2016-06-06 message, e.g., "Documents mentioning what this is doing and that it is dangerous." For example, within a specific product, use of an http: URL to reach executable code may be documented, intentional, and unavoidable. One scenario is that the code is owned by a third party who operates only an http server, not an https server, and there may be no way to achieve desired product functionality without accepting the risk and proceeding with the http download. There are other relevant scenarios as well.

This leaves the question of what is an appropriate timeframe for allowing the affected vendor to respond in these cases. For example, http://www.symantec.com/security/OIS_Guidelines%20for%20responsible%20disclosure.pdf
suggests about 10 days to acknowledge receipt, etc.

We feel that the ASUS example is consistent with the rest of our 2016-06-07 message.
http://teletext.zaibatsutel.net/post/145370716258/deadupdate-or-how-i-learned-to-stop-worrying-and
has a timeline section showing that an attempt to consult with the vendor occurred for more than a month, with a final outcome of "No response from vendor." When there is no input from the vendor, only the CNA is involved in the decision about whether the product has a vulnerable behavior that CVE consumers may wish to track.

The CVE Team

>On 06/07/2016 11:14 PM, Common Vulnerabilities & Exposures wrote:
>> Kurt –
>>
>> As you are well aware, CVE assignment is never an exact science. The
>following is a description of our current practice:
>>
>>
>> ·         The question of whether it is "software acting exactly as it is designed"
>depends on who sends the CVE ID request. For example, it is plausible for a
>vendor's server to offer the same executable code (or update service)
>through both HTTP and HTTPS, and the URL hardcoded into a client-side
>product was -- by design -- supposed to start with https, but it started with
>http by accident. Thus, if it is a vendor-initiated request for a CVE ID to tag a
>required security update for their customers, then the CVE ID request is
>always accepted.
>>
>> ·         If the origin of the CVE ID request seems unrelated to the party that
>wrote the code, then (sometimes but not 100% of the time) the CVE ID
>request is rejected with a suggestion to consult with the vendor.
>>
>> ·         It would be hard to achieve 100% rejections, even if a CNA wanted to,
>because the person sending the CVE ID request may neglect to mention, or
>may be unwilling to mention, the precise nature of the problem. A large
>fraction of the population believes that it is always a vulnerability for any
>product to continuously make requests for executable code over
>unencrypted HTTP, with no other integrity protection, and execute code
>whenever a response is received. Because that much is obvious in their world
>view, their vulnerability description may focus on other details, such as file-
>format manipulation, etc.
>>
>> ·         Our prevailing opinion is that, for this HTTP/executable-code scenario,
>the best a CNA can do is assign CVE IDs in cases where they believe CVE
>consumers want those IDs to exist. If the requester sends a credible
>description of high exploitation likelihood, and there is no counterclaim from
>the vendor itself that this is "software acting exactly as it is designed," then it
>qualifies for a CVE ID.
>>
>> This matches what happened for ASUS (the vendor refused to respond at
>all). If another requester does not describe exploitation likelihood or asserts
>that there is essentially no exploitation likelihood, and there is no clarification
>from the vendor, then the request can be rejected on the "software acting
>exactly as it is designed" grounds.
>>
>> In other words, existence of a CVE ID should depend a little less on a
>comprehensive theory of what a vulnerability is, and depend a little more on
>judgment about whether the ID will help real-life organizations with risk
>management. This requires a little more work from the CNA, but makes CVE
>more useful than with either the 100% accept or 100% reject options.
>>
>> Regards,
>>
>> The CVE Team
>>
>>
>>
>>
>> From: [hidden email] [mailto:owner-cve-[hidden email]] On Behalf Of Kurt Seifried
>> Sent: Monday, June 06, 2016 12:18 PM
>> To: cve-editorial-board-list <[hidden email]>
>> Subject: definitions for what is CVE worthy with downloads/installs and containers
>>
>> So I've seen the classic "a CVE is for a security vulnerability, a security vulnerability is something that crosses a trust boundary".
>>
>> Obviously this is open to all sorts of interpretation. For passwords, for example, we can all agree a secret backdoor with a hard-coded password is a CVE, but what about an app that has a default password that you are then forced to change once you log in? What about an app that must be exposed to the network first (introducing a race where an attacker can potentially get in before the password is changed)? In general we have a good idea of where to draw the line for passwords (documented? changeable? is there a realistic secure way to deploy the product?).
>>
>> So first, a quick story: my sons play Minecraft a lot, so I'm going to set them up a server. I found some software, but setup is of course annoying (some weird dependencies that aren't packaged on my platforms of choice). So I thought "hey, let's find a Docker container!" and luckily there are several:
>>
>> https://github.com/5t111111/docker-pocketmine-mp/blob/master/Dockerfile
>>
>> You will note it has the line:
>>
>> RUN cd PocketMine-MP && wget -q -O - http://cdn.pocketmine.net/installer.sh | bash -s - -v beta
>>
>> which is a fancy way of saying "go get http://cdn.pocketmine.net/installer.sh and run it". Luckily this is slightly mitigated by an earlier
>>
>> USER pocketmine
>>
>> statement, which means the command is running as an unprivileged user and not root. But a quick search of GitHub reveals:
>>
>>
>> https://github.com/search?utf8=%E2%9C%93&q=RUN+bash+wget++http&type=Code&ref=searchresults
>>
>> which for example shows:
>>
>> https://github.com/wyvernnot/docker-minecraft-pe-server/blob/master/Dockerfile
>>
>> which does not downgrade to a user but instead runs the script as root. So at what point do we draw a line in the sand for "downloads random stuff and runs it" being CVE worthy? My thoughts:
>>
>> To make it less CVE worthy:
>>
>> 1) Documentation mentioning what this is doing and that it is dangerous
>> 2) Downgrading to less privileged users
>> 3) Uses HTTPS to serve the content
>> 4) Uses a well known/trusted site to serve the content
>>
>>
>> To make it more CVE worthy:
>>
>> 1) No documentation/mention of what it is doing
>> 2) Runs commands as a privileged user (e.g. root)
>> 3) Uses HTTP to download content (and has no end to end signing/checks)
>> 4) Uses basically random servers nobody has ever heard of
>> 5) is widely used (e.g. for containers something in the Docker Registry)
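The "no end to end signing/checks" item is the crux of the HTTP case. A minimal sketch of the verify-before-execute logic such a Dockerfile could use; the filenames are fabricated and the digest is computed locally here purely to demonstrate the refuse-on-mismatch path (in practice the expected digest would be pinned out-of-band, e.g. from the vendor's release notes):

```shell
#!/bin/sh
# Sketch only: installer.sh and the tampering step are fabricated for
# illustration of the pattern, instead of piping HTTP straight into bash.
set -eu

# Stand-in for downloading the installer over plain HTTP
printf 'echo hello from installer\n' > installer.sh

# Digest the publisher would have pinned at release time
EXPECTED=$(sha256sum installer.sh | awk '{print $1}')

# Simulate the content being altered in transit (the risk with plain HTTP)
printf 'echo something evil\n' > installer.sh

ACTUAL=$(sha256sum installer.sh | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
    sh installer.sh
else
    echo "checksum mismatch: refusing to run installer"
fi
```

A plain `wget | bash` has no equivalent of the `else` branch, which is exactly what makes criteria 3 and 4 above bite.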
>>
>> For example a Dockerfile from Nginx:
>>
>> https://github.com/nginxinc/docker-nginx/blob/11fc019b2be3ad51ba5d097b1857a099c4056213/mainline/alpine/Dockerfile
>>
>> TL;DR: They set the GPG key fingerprint as an env variable in the Dockerfile:
>>
>> ENV GPG_KEYS B0F4253373F8F6F510D42178520A9993A1C052F8
>>
>> They later download that key and use it to verify the nginx tarball they downloaded:
>>
>>             && gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$GPG_KEYS" \
>>             && gpg --batch --verify nginx.tar.gz.asc nginx.tar.gz \
>>
>> so they are definitely trying to do the right thing (I need to confirm that the build will actually error out if the key isn't available, the wrong key is served, or the .asc signature is bad), and assuming it works as expected (an error aborts the Docker build) then obviously this is safe and there is no need for a CVE.
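On the "will it error out during build" question: each command chained with `&&` in a `RUN` step must exit 0, and `gpg --batch --verify` exits non-zero on a bad or missing signature, which fails the `RUN` step and aborts `docker build`. An illustrative Dockerfile showing the shape of the pattern; the base image, version numbers, and download URLs here are placeholders, not the actual nginx image:

```dockerfile
# Illustrative sketch, not the real nginx Dockerfile. Any non-zero exit in
# the && chain (failed download, unreachable keyserver, bad signature)
# fails this RUN step and aborts the whole build.
FROM alpine:3.4
ENV GPG_KEYS B0F4253373F8F6F510D42178520A9993A1C052F8
RUN apk add --no-cache gnupg wget \
    && wget -q https://nginx.org/download/nginx-1.11.1.tar.gz \
    && wget -q https://nginx.org/download/nginx-1.11.1.tar.gz.asc \
    && gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$GPG_KEYS" \
    && gpg --batch --verify nginx-1.11.1.tar.gz.asc nginx-1.11.1.tar.gz
```

The residual risk is the keyserver fetch itself, which is why pinning the full key fingerprint in `ENV GPG_KEYS` (rather than a short key ID) matters.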
>>
>> But most containers are not doing anything like this, not even close, and I suspect we need to start assigning CVEs, as it looks like a lot of popular container Dockerfiles are very insecure in how they build software.
>>
>>
>>
>>
>> --
>> Kurt Seifried -- Red Hat -- Product Security -- Cloud
>> PGP A90B F995 7350 148F 66BF 7554 160D 4553 5E26 7993
>> Red Hat Product Security contact: [hidden email]
>>