Reflected Cross Site Scripting - pallavitewari21/Secure-Code GitHub Wiki

Risks

Reflected XSS attacks are less dangerous than stored XSS attacks, which create a persistent problem affecting every user who visits a particular page, but they are far more common. Any page that takes a parameter from a GET or POST request and displays that parameter back to the user in some fashion is potentially at risk. A page that fails to treat query string parameters as untrusted content can allow the construction of malicious URLs. An attacker will spread these malicious URLs in emails, in comments sections, or in forums. Since the link points at a site the user trusts, victims are much more likely to click on it, not knowing the harm that it will do.

Reflected XSS vulnerabilities are easy to overlook in your code reviews since the temptation is to only check code that interacts with the data store. Be particularly careful to check the following types of pages:

- **Search results** – does the search term get displayed back to the user? Is it written out in the page title? Are you sure it is being escaped properly?
- **Error pages** – if you have error messages that complain about invalid input, does that input get escaped properly when it is displayed back to the user? Do your 404 pages mention the path being searched for?
- **Form submissions** – if a page POSTs data, does any part of the submitted data get displayed back to the user? What if the form submission is rejected: does the error page allow injection of malicious code? Does an erroneously submitted form get pre-populated with the values previously submitted?

Our example hack demonstrated a maliciously crafted GET request. However, POST requests should be treated with similar caution. If you don't protect against cross-site request forgery (CSRF), attackers can easily construct malicious POST requests. And even if you do protect against CSRF, attackers will often use a combination of vulnerabilities to construct poisoned POST requests.
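To make the risk concrete, here is a minimal sketch in plain Python (no web framework; the URL and template string are invented for illustration) showing how a reflected search parameter becomes executable markup, and how escaping defuses it:

```python
import html
from urllib.parse import parse_qs, urlparse

# A maliciously crafted URL an attacker might circulate (hypothetical).
url = "https://example.com/search?q=<script>alert(1)</script>"
query = parse_qs(urlparse(url).query)["q"][0]

# Vulnerable: the raw parameter is interpolated straight into the page,
# so the browser executes the injected <script> tag.
unsafe_page = f"<h1>Results for {query}</h1>"

# Safe: escape before rendering, so the payload is displayed as text.
safe_page = f"<h1>Results for {html.escape(query)}</h1>"

print(unsafe_page)  # the <script> tag survives intact
print(safe_page)    # the payload is rendered as inert entity-encoded text
```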

Prevention

Escape Dynamic Content

Web pages are made up of HTML, usually described in template files, with dynamic content woven in when the page is rendered. Stored XSS attacks make use of the improper treatment of dynamic content coming from a backend data store. The attacker abuses an editable field to insert some JavaScript code, and it is evaluated on page load.

Unless your site is a content-management system, it is rare that you want your users to author raw HTML. Instead, you should escape all dynamic content coming from a data store, so the browser knows it is to be treated as the contents of HTML tags, as opposed to raw HTML.

Escaping dynamic content generally consists of replacing significant characters with their HTML entity encodings:

| Character | Entity encoding |
| --------- | --------------- |
| `"`       | `&#34;`         |
| `#`       | `&#35;`         |
| `&`       | `&#38;`         |
| `'`       | `&#39;`         |
| `(`       | `&#40;`         |
| `)`       | `&#41;`         |
| `/`       | `&#47;`         |
| `;`       | `&#59;`         |
| `<`       | `&#60;`         |
| `>`       | `&#62;`         |

Most modern frameworks will escape dynamic content by default – see these code samples for details.
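In Python, for instance, the standard library's `html.escape` handles the most security-critical of these characters (it emits named and hexadecimal entities such as `&quot;` and `&#x27;` rather than the decimal forms listed above, which is equivalent):

```python
import html

# A typical injection payload that tries to break out of an attribute.
tainted = '"><script>alert(document.cookie)</script>'

# quote=True (the default) also escapes single and double quotes,
# which matters when the value lands inside an HTML attribute.
escaped = html.escape(tainted, quote=True)
print(escaped)
```

Note that `html.escape` only covers `&`, `<`, `>`, `"`, and `'`; the remaining characters in the table above matter mainly in non-text contexts such as URLs or script blocks, where a context-aware encoder should be used instead.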

Be even more careful if untrusted content is being inserted into <script> or <style> tags on a page. Escaping in these scenarios needs special consideration, and if your choice of tools doesn’t have stylesheet and script encoding available by default, consider using a dedicated tool.
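As a sketch of that special handling, one common approach for embedding untrusted data inside a `<script>` block is to serialize it as JSON and additionally escape `<`, so that a `</script>` payload cannot terminate the tag early (plain Python; the variable names and payload are illustrative):

```python
import json

user_value = '</script><script>alert(1)</script>'

# json.dumps alone leaves "</script>" intact, which can close the tag
# early; escaping "<" as the JS string escape \u003c keeps the payload
# inert inside a <script> block while decoding back to "<" in JavaScript.
safe_literal = json.dumps(user_value).replace("<", "\\u003c")

page = f"<script>var q = {safe_literal};</script>"
print(page)
```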

Whitelist Values

If a particular dynamic data item can only take a handful of valid values, the best practice is to restrict the values in the data store, and have your rendering logic only permit known good values. If a page expects a “country” parameter in the URL, for instance, make sure it is only permitted to take on one of a list of valid enumerated values.
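A minimal sketch of this pattern in Python, with an invented allowlist of country codes:

```python
# Hypothetical allowlist of valid country codes; illustrative only.
ALLOWED_COUNTRIES = {"us", "gb", "de", "fr"}

def render_country(param: str) -> str:
    """Render the parameter only if it is a known-good value."""
    if param not in ALLOWED_COUNTRIES:
        # Reject (or fall back to a default) rather than reflecting input.
        raise ValueError("invalid country parameter")
    return f"<p>Country: {param}</p>"

print(render_country("de"))
```

Because only enumerated values ever reach the page, there is no opportunity for attacker-controlled markup to be reflected, and no escaping step to forget.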

Implement a Content-Security-Policy

Modern browsers support Content Security Policies that allow the author of a web page to control where JavaScript (and other resources) can be loaded and executed. XSS attacks rely on the attacker being able to run malicious scripts on a user’s web page - either by injecting inline `<script>` tags somewhere within the HTML of the page or by tricking the browser into loading the JavaScript from a malicious third-party domain.

By setting a content security policy in the response header, you can tell the browser to never execute inline JavaScript, and to lock down which domains can host JavaScript for a page:

```
Content-Security-Policy: script-src 'self' https://apis.google.com
```

By whitelisting the URIs from which scripts can be loaded, you are implicitly stating that inline JavaScript is not allowed. The content security policy can also be set in a `<meta>` tag in the `<head>` element of the page:

```
<meta http-equiv="Content-Security-Policy" content="script-src 'self' https://apis.google.com">
```

This approach will protect your users very effectively! However, it may take a considerable amount of discipline to make your site ready for such a header. Inline script tags are considered bad practice in modern web development - mixing content and code makes web applications difficult to maintain - but they are common in older, legacy sites.
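As a sketch, assuming a WSGI-style application where response headers are a list of name/value pairs, attaching the policy header might look like:

```python
# The policy string from the example above.
CSP = "script-src 'self' https://apis.google.com"

def with_csp(headers):
    """Append a Content-Security-Policy header to a response header list."""
    return headers + [("Content-Security-Policy", CSP)]

# Usage: add the policy to an ordinary HTML response.
headers = with_csp([("Content-Type", "text/html; charset=utf-8")])
print(headers)
```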

To migrate away from inline scripts incrementally, consider making use of CSP Violation Reports. By adding a report-uri directive to your policy header, the browser will notify you of any policy violations rather than preventing inline JavaScript from executing:

```
Content-Security-Policy-Report-Only: script-src 'self'; report-uri http://example.com/csr-reports
```

This will give you the reassurance that there are no lingering inline scripts before you ban them outright.
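Browsers POST violation reports to the report-uri endpoint as JSON, with the details nested under a `csp-report` key. A minimal sketch of reading one such report (the field values here are invented for illustration):

```python
import json

# Example request body in the shape browsers send to the report endpoint.
body = json.dumps({
    "csp-report": {
        "document-uri": "https://example.com/page",
        "violated-directive": "script-src 'self'",
        "blocked-uri": "inline",
    }
})

# A report endpoint would parse the body and log or aggregate the fields.
report = json.loads(body)["csp-report"]
print(report["violated-directive"], report["blocked-uri"])
```

Aggregating these reports tells you exactly which pages still carry inline scripts, so you can fix them before switching to the enforcing header.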
