This blog post is a recap of the 2018 ICS Year in Review webinar hosted on March 13.
Webinar Overview: Sr. Vulnerability Analyst Reid Wightman and Intelligence Analyst Selena Larson from the Dragos Threat Intelligence team present an overview of the 2018 Year in Review: ICS Vulnerabilities report and discuss Dragos’ key findings from public advisory reports throughout the year.
Below are questions from webinar attendees with answers from Reid and Selena:
Q: How do you rank/rate errors? Are these errors in tone/severity or type/classification etc.?
A: Dragos identifies errors in the Common Vulnerability Scoring System (CVSS) vector and numeric score. Sometimes the severity of the vulnerability is misstated, or the description itself is incorrect. For instance, common inaccuracies we see reported are whether an attack vector is network or local (AV:N vs. AV:L), and whether user interaction is required (UI:R vs. UI:N).
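As a minimal sketch (not Dragos’ tooling), the comparison described above can be illustrated by parsing two CVSS v3 vector strings and diffing the metrics that most often get misreported, such as Attack Vector (AV) and User Interaction (UI). The vector strings here are hypothetical examples:

```python
def parse_cvss_vector(vector: str) -> dict:
    """Split a CVSS v3 vector like 'CVSS:3.0/AV:N/.../UI:N' into a metric dict."""
    parts = vector.split("/")
    # Drop the leading 'CVSS:3.x' version label if present
    if parts and parts[0].startswith("CVSS:"):
        parts = parts[1:]
    return dict(p.split(":", 1) for p in parts)

def diff_vectors(reported: str, verified: str) -> dict:
    """Return metrics whose values differ between a reported and a verified vector.

    Assumes both vectors list the same metrics (true for full base vectors).
    """
    a, b = parse_cvss_vector(reported), parse_cvss_vector(verified)
    return {m: (a[m], b.get(m)) for m in a if a[m] != b.get(m)}

# Hypothetical example: an advisory claims local access with user interaction
# required, but verification shows the flaw is network-reachable with none.
reported = "CVSS:3.0/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H"
verified = "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"
print(diff_vectors(reported, verified))  # {'AV': ('L', 'N'), 'UI': ('R', 'N')}
```

A one-letter difference in AV or UI changes the numeric base score substantially, which is why these two metrics account for so many of the scoring errors discussed in the report.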
Q: Could you please elaborate on the errors in reporting? Are the errors related to, e.g., recommended mitigating factors?
A: The errors we count are errors in CVSS scores. We do identify other issues in advisories, too, such as a lack of mitigation advice, conflicting advice, and even incorrect identification of the system type (for example, advisories that say “Update FIRMWARE to version…” when the vulnerability is in PC software, not firmware). But those mitigation and other issues don’t count as errors in our statistics.
Q: Did you see many/any advisories include or reference custom SNORT rules as an added mitigation where appropriate?
A: Some advisories do, yes. These typically accompany Cisco Talos research and include Cisco IDS rules. Often these rules aren’t publicly available; you’ll need a commercial Snort feed to obtain them. They are most common when the research itself was done by Talos.
Q: Do you anticipate lower error rates in CVEs if these findings are reported using the IVSS method vs CVSS method?
A: We’re not sure. There still isn’t an agreed-upon standard for IVSS. If we had a very simple version of IVSS, I do think there would be lower error rates, but it remains to be seen whether the industry can rally around a simple standard.
Q: Any feel for Geo breakdowns? Are NA, EMEA or APAC vendors more/less accurate or more/less likely to self-report vulns compared to the others? Siemens seems proactive.
A: Nothing that is backed up by numbers. Certainly Siemens’ self-reporting and other advisories skew things towards EMEA.
Q: How quickly does DHS update advisories when you point out problems?
A: While we used to point out problems to ICS-CERT, we haven’t recently. We tend to report problems to the original researcher, get verification that the score is, indeed, incorrect, and leave it to the researcher to correct the public advisory. We have yet to see a correction in a public advisory using either method.
Q: Is ICS-CERT’s quality as good as other CERTS, or are vendors on the hook to make sure these are good?
A: The results are a bit too skewed towards one vendor to say for sure, but my feeling is that vendors should absolutely get involved if they want accurate reports. The numbers all indicate that a lack of vendor involvement in the disclosure process results in worse advisories.
Q: How do the ICS-CERT advisories differ from the information being disseminated by the ISACs?
A: Dragos does not have access to any ISAC-specific advisories.
Q: Would you attribute the changes in CVEs released more toward reaction to exploits of them or proactive due to change in vendor diligence or response to researchers?
A: The proactive research done by vendors is probably being done by trained teams, and is probably the reason for more accurate advisories. While it’s impossible to prove this from our position, our guess is that yes, reaction to exploits or other vulnerability reporting might result in less accurate advisories simply because the vulnerability lands with a team that, on the product side, may be inexperienced at dealing with reports of security issues, and on the product CERT side, may be unfamiliar with the actual product. Couple this with a time constraint in the ‘reaction’ case, and you get a less accurate report. In proactive hunting, time is on the side of the vendor to better understand the bug and its ramifications.
View the slides and full webinar here: