Progress in our time
Also submitted via web form. It occurs to me: can I post a link to the commit instead of the data? That way you know for sure it's legitimate (though I assume you then need an extra step to download it, etc.).
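For example, a recipient could pin their fetch to the exact commit SHA, so the content can't silently change after the link is shared. A minimal sketch in Python, assuming the file sits at the repo root (the path here is illustrative, not a confirmed path in the repo; the SHA is the commit linked later in this thread):

  import urllib.request

  REPO = "distributedweaknessfiling/DWF-CVE-2017-1000000"
  SHA = "7e1ff65791a766fb74d440ab3110ab1331032e50"  # commit linked below
  PATH = "CVE-2017-1000001.json"  # illustrative path, assumed at repo root

  # Pinning to the SHA (rather than a branch) fetches the exact bytes
  # that were in the linked commit.
  url = "https://raw.githubusercontent.com/%s/%s/%s" % (REPO, SHA, PATH)
  with urllib.request.urlopen(url) as resp:
      print(resp.read().decode("utf-8"))

|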
Kurt,
On Thu, 20 Apr 2017, Kurt Seifried wrote:

: Progress in our time
:
: https://github.com/distributedweaknessfiling/DWF-CVE-2017-1000000/commit/7e1ff65791a766fb74d440ab3110ab1331032e50

As an early advocate, and now an apparent critic... =)

Why did DWF break from the prior format?

https://github.com/distributedweaknessfiling/DWF-Database/

We had per-year CSVs with the assignment info. From there we could look at the artifacts in a separate repo using the same ID. Now you are using a new repo and format:

https://github.com/distributedweaknessfiling/DWF-CVE-2017-1000000

Not only do we lose the CSV, we move entirely to JSON format. While that is of obvious interest to some stakeholders, and has been discussed on-list recently, it isn't necessarily immediately usable to everyone. Further, the new format means there is no central file or 'registry' to reference these. Consider what the URL above gives us:

  CVE-2017-1000001.json    CVE-2017-1000001    3 months ago
  CVE-2017-1000357.json    ODL CVE's           7 hours ago
  CVE-2017-1000358.json    ODL CVE's           7 hours ago
  CVE-2017-1000359.json    ODL CVE's           7 hours ago
  CVE-2017-1000360.json    ODL CVE's           7 hours ago
  CVE-2017-1000361.json    ODL CVE's           7 hours ago

So we have to click each link, digest the JSON, and figure out the assignment? Compare that to the previous system, where a single CSV gave us a reference point, vendor, product, dates, type of vuln, and who discovered it. This seems a step back in many ways.

After several months of no new DWF assignments, and with a DWF-minted CNA in the form of an individual (which I brought up on-list after the Twitters caught my attention), one has to wonder if DWF is losing focus on the original goal.
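For what it's worth, a per-year CSV index could be regenerated from a checkout of the JSON repo. A rough sketch, assuming the files carry MITRE-style CVE JSON fields (an assumption about these files' contents, not verified against the repo):

  import csv
  import glob
  import json

  rows = []
  for path in sorted(glob.glob("DWF-CVE-2017-1000000/*.json")):
      with open(path) as f:
          entry = json.load(f)
      meta = entry.get("CVE_data_meta", {})  # assumed field names
      desc = entry.get("description", {}).get("description_data", [])
      rows.append([meta.get("ID", ""),
                   meta.get("ASSIGNER", ""),
                   desc[0]["value"] if desc else ""])

  # Write a single per-year CSV as a central reference point.
  with open("DWF-2017.csv", "w", newline="") as f:
      w = csv.writer(f)
      w.writerow(["CVE ID", "Assigner", "Description"])
      w.writerows(rows)

.b

|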
On Thu, Apr 20, 2017 at 11:50 PM, jericho <[hidden email]> wrote:
> Kurt,

As per a previous board call conversation: as the historical data is also historical, we will at some point offer an "old style" CSV view of the database, but I want to re-examine a lot of assumptions there and find some potentially better ways to do it.

Now the meat of the matter: for scaling reasons I'll be sharding the data. The master DB is at:

Essentially there is a CSV for each year, with links to the repos that hold the data. The repos themselves just hold the JSON files in a directory. This will help me deal with two main issues: git churn (e.g. CVE-2009-3555, with a gazillion updates) and large CVEs (e.g. ones with large files embedded within them, like proofs of concept that require a large image file or whatever).
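To make the sharding concrete, here is a minimal sketch of resolving an ID to its shard repo, assuming repos are named DWF-CVE-<year>-<block base> with a block size of 1,000,000 (both inferred from the repo name above, not a confirmed design):

  BLOCK = 1000000  # assumed shard size, inferred from DWF-CVE-2017-1000000

  def shard_repo(cve_id):
      # e.g. "CVE-2017-1000357" -> "DWF-CVE-2017-1000000"
      _, year, num = cve_id.split("-")
      base = (int(num) // BLOCK) * BLOCK
      return "DWF-CVE-%s-%d" % (year, base)

  print(shard_repo("CVE-2017-1000357"))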
Kurt Seifried -- Red Hat -- Product Security -- Cloud
PGP A90B F995 7350 148F 66BF 7554 160D 4553 5E26 7993
Red Hat Product Security contact: [hidden email]

|
Oh, and I forgot to say: the DWF will be pushing stuff to MITRE as fast as we can, so hopefully it shows up in the primary database in the usual way quickly, and people don't have to go hunting amongst various CNAs for data (they can if they want to reduce latency, but for most that won't be worth it).

On Fri, Apr 21, 2017 at 9:26 AM, Kurt Seifried <[hidden email]> wrote:
Kurt Seifried -- Red Hat -- Product Security -- Cloud
PGP A90B F995 7350 148F 66BF 7554 160D 4553 5E26 7993
Red Hat Product Security contact: [hidden email]

|