HP Security Research | Cyber Risk Report 2015

The graph represents the median number of scans required to fix a critical or high vulnerability in a given kingdom. For an organization, it would be worthwhile to track the frequency of static scans performed on an application during its lifecycle, and in turn to calculate the number of days it takes to verify a fix for that application. This would also give an idea of the risk assumed by the organization during that period (this only applies to applications in a production environment).

We were hoping that critical vulnerabilities would be the fastest to fix. Interestingly, this was not always the case. One possible reason could be that most organizations tend to fix and verify all critical and high vulnerabilities first. Hence, developers could be prioritizing their tasks from a single bucket based on the ease of completing the task, rather than on the severity of the issue.

The overall data (not represented in the graph above) also showed that environment-related issues with lower severities were usually verified after a very long time, as many as 240 scans later. This might be acceptable, because typical fixes to environment issues involve tweaking the server or application configuration toward the end of a release to meet the security standards of a production environment (while the application is developed in a test environment). It could also indicate that some organizations perform multiple scans within short intervals, or have long development periods between releases.

While issues generally get fixed after anywhere between two and 12 scans, not all of them get fixed. This could depend on the sensitivity of the application and the risk appetite of each organization. HPSR was interested to see whether any patterns stood out when looking at the whole picture, agnostic of the application and the organization. The chart below provides this data.

Figure 34.
Issues fixed by kingdom; higher numbers are better.
[Chart: percentage of issues fixed in each kingdom, broken down by severity (Critical, High, Medium, Low), across the kingdoms API Abuse, Code Quality, Encapsulation, Environment, Errors, Input Validation and Representation, Security Features, and Time and State. Total issues fixed: 68%.]
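The fix-rate and scans-to-verify metrics discussed above can be sketched in a few lines of code. This is an illustrative sketch only: the records, function names, and values below are hypothetical and are not drawn from the report's underlying dataset.

```python
from statistics import median

# Hypothetical scan records: (kingdom, severity, number of scans until the fix
# was verified, or None if the issue was never verified as fixed).
findings = [
    ("Input validation and representation", "Critical", 3),
    ("Input validation and representation", "High", 2),
    ("Security features", "Critical", 5),
    ("Environment", "Low", 240),   # environment issues often verified very late
    ("Code quality", "Medium", None),  # never verified as fixed
    ("Errors", "High", 4),
]

def fix_rate(records):
    """Percentage of findings eventually verified as fixed."""
    fixed = sum(1 for _, _, scans in records if scans is not None)
    return 100.0 * fixed / len(records)

def median_scans_to_fix(records, severities):
    """Median number of scans needed to verify a fix for the given severities."""
    counts = [scans for _, sev, scans in records
              if sev in severities and scans is not None]
    return median(counts) if counts else None

print(f"{fix_rate(findings):.0f}% of issues fixed")
print("Median scans to verify (Critical/High):",
      median_scans_to_fix(findings, {"Critical", "High"}))
```

Tracking these two numbers per release, alongside scan frequency, would let an organization estimate the window of risk between a vulnerability's discovery and its verified fix.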
In a typical security development lifecycle, it is recommended that corporations have practices in place for identifying weaknesses in their applications. As part of that lifecycle, it is the responsibility of the team to verify that known vulnerabilities and any close variants have been mitigated. In studying the data from the past year, we observed that for applications following a security development lifecycle, where applications are analyzed more than once, over 68 percent of vulnerabilities are resolved.

Overall, the percentages of issues fixed in the Security Features, Encapsulation, and Code Quality kingdoms are relatively high. It is also interesting to note that in the Input Validation and Representation kingdom, where well-known vulnerabilities such as cross-site scripting and SQL injection reside, the percentages of critical and high issues fixed are similar. This could imply that such injection issues are prioritized and fixed together during implementation.

Conclusion

Knowledge is power; being aware of the specific circumstances that give rise to vulnerabilities lets security practitioners address their root causes and, where healthy secure development practices are followed, address them even before they are committed to code and released to an unsuspecting world.

For a snapshot of the state of application security in 2014, we analyzed a sample set of security audits performed by HP Fortify on Demand on 378 mobile apps and 6,504 Web apps, along with 138 Sonatype reports from 113 projects. These audits include results from static, dynamic, and manual analysis.
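As an illustration of the kind of issue counted in the Input Validation and Representation kingdom discussed above, the following sketch shows a classic SQL injection flaw and its parameterized fix. The schema and inputs are hypothetical examples, not findings from the audits:

```python
import sqlite3

# Hypothetical in-memory database for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query,
# so the WHERE clause matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Fixed: a parameterized query treats the input purely as data.
fixed = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(vulnerable)  # both rows leak
print(fixed)       # no rows match
```

Static analysis flags the concatenated query; the parameterized version is the kind of fix whose verification the scan counts above measure.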
All identified issues were classified according to the HP Software Security Taxonomy (originally the “Seven Pernicious Kingdoms”), which was updated and refined in mid-2014 and remains under continued expansion in 2015.

The good news, if there is good news to be had, is that the uproar around such high-profile incidents as Heartbleed and Shellshock may yet lead software developers and architects to tackle security issues, particularly those in foundational or legacy code, more effectively. We saw that repeated scans lead to better software, and that the open-source development community seems to be selecting for healthier, better componentry: components with more security issues simply aren’t used as often by other developers.

That said, vulnerabilities are still pervasive. Sorting the vulnerabilities we discovered into the categories of our taxonomy, we found every indication that more Web and mobile apps contained discoverable vulnerabilities. Ironically, misused security features continue to be the trouble area for both Web and mobile apps. There were a few differences endemic to each type of application; the nature of mobile apps means they are particularly prone to code quality issues rarely found in Web apps, while Web apps suffer from the types of problems covered in the errors section of our taxonomy.

Progress is possible. Despite the rise in detected vulnerabilities, the very fact that they are daylighted means that they can be analyzed and addressed. Our research indicates that critical-class vulnerabilities are taken seriously and given patch-development priority: not a surprising finding, when one remembers that post-release patching remains part of a healthy development lifecycle, but perhaps a sign that smart practices truly do take hold and crowd out lesser habits in the security ecosystem.