
Learning Graylog for Fun and Profit

September 30, 2020 by Aaron Weiss

Since I’ve been increasing my knowledge running my own VPS and VM servers on my FreeNAS/TrueNAS machine, I’ve learned the importance of logs.

Whether it’s a website, application, or operating system, most likely, they are producing various forms of logs. These logs hold the clues to generic operations and errors that can help you understand how well or poorly the application is working.

The problem is that these logs are constantly being generated and are not the most human-friendly. Operating systems alone generate tens of thousands of lines of logs. Keeping up with these logs is difficult. That’s where logging software comes in.

Of course, no conversation about logs can begin without mentioning Ren and Stimpy.

How I Got Into Logging

At work, we use software called Splunk. At first, I could barely get a sense of what to do, because I didn’t understand our company’s infrastructure and was only given a query to execute to complete my specific task.

Later on, I got the idea of using this software for another project. After reaching out for help to the teams who had dropped this application into my lap, I got no replies and was at a standstill. That didn’t stop my motivation to approach my goal, because I saw a lot of potential in having this information.

Splunk is available for free, but has some limitations:

  • Alerting (monitoring) is not available.
  • There are no users or roles. This means:

    • There is no login. You are passed straight into Splunk Web as an administrator-level user.
    • The command line or browser can access and control all aspects of Splunk Free with no user and password prompt.
    • There is only the admin role, and it is not configurable. You cannot add roles or create user accounts.
    • Restrictions on search, such as user quotas, maximum per-search time ranges, and search filters are not supported.
  • Distributed search configurations including search head clustering are not available.
  • Deployment management capabilities are not available.
  • Indexer clustering is not available.
  • Forwarding in TCP/HTTP formats is not available. This means you can forward data from a Free license instance to other Splunk platform instances, but not to non-Splunk software.
  • Report acceleration summaries are not available.
  • The Free license gives very limited access to Splunk Enterprise features.
  • The Free license is for a standalone, single-instance use only installation.
  • The Free license does not expire.
  • The Free license allows you to index 500 MB per day. If you exceed that you will receive a license violation warning.
  • The Free license will prevent searching if there are a number of license violation warnings.

The majority of these limitations aren’t terrible. However, the first bullet item is where I have an issue: “Alerting (monitoring) is not available.” Having emails sent to you when an event matches a rule you create is one of the reasons logging software is so powerful.

Therefore, I looked to Graylog as a possible solution. It offers an interface similar to Splunk’s, supports notifications, and is extensible through plugins and content packs. Best part: it’s open source.

There are many open source logging platforms available, but Graylog was the first I tried, and it appeared to work well for me. The installation process is quite extensive: you’ll need reasonable familiarity with the command line, and I found you need a minimum of 4 GB of RAM for decent performance.

I initially installed Graylog in January 2020 using both the documentation and a decent YouTube tutorial that provided more instruction. It required me to install Java, MongoDB, Elasticsearch, and then Graylog. This stack is the backbone of Graylog and is the reason you need at least 4 GB of RAM. I first installed it on a VM on my TrueNAS/FreeNAS machine.

Having Graylog installed isn’t enough; you need to send logs to it. Every system and application produces logs in different ways, and there are various ways to send them to Graylog. That is where I got most hung up on this project.

Creating a UDP Syslog Input

The installation instructions above cover ingesting rsyslog messages into Graylog. rsyslog is the standard syslog daemon on most Linux systems. I found configuring rsyslog to forward to Graylog rather simple, and it became easier to understand the more I did it for my VMs and Raspberry Pis.
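For reference, the rsyslog side of that setup boils down to a single forwarding rule. This is a minimal sketch (the hostname is a placeholder, and 1514 is the UDP syslog port I use later in this post); the Graylog documentation covers the details:

# /etc/rsyslog.d/90-graylog.conf
# Forward everything to the Graylog syslog UDP input (a single @ means UDP, @@ would be TCP)
*.* @graylog.example.com:1514;RSYSLOG_SyslogProtocol23Format

# Then reload rsyslog so the rule takes effect
sudo systemctl restart rsyslog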

Apache logs

What really took me some time was figuring out how to send logs from Apache servers. After lots of digging, I found two older blog posts (here and here) about how others had done it years prior, and I finally figured out how to do it myself.

After some time, I was able to put together the following:

LogFormat "{ \"version\": \"1.1\", \"host\": \"%V\", \"short_message\": \"%r\", \"timestamp\": %{%s}t, \"level\": 6, \"user_agent\": \"%{User-Agent}i\", \"source_ip\": \"%a\", \"duration_usec\": %D, \"duration_sec\": %T, \"request_size_byte\": %O, \"http_status_orig\": %s, \"http_status\": %>s, \"http_request_path\": \"%U\", \"http_request\": \"%U%q\", \"http_method\": \"%m\", \"http_referrer\": \"%{Referer}i\", \"from_apache\": \"true\" }" apache_prod_greylog
CustomLog ${APACHE_LOG_DIR}/prod_ssl_apache_gelf.log apache_prod_greylog
CustomLog "| /bin/nc -u 192.99.167.196 1514" apache_prod_greylog

The first line creates a custom LogFormat.

The second line outputs that format to a new log file in the Apache log directory.

The final line pipes the formatted log entries to a netcat command that sends the data to an IP address on a specific port. Note: that IP address is no longer live.

Apache logs aren’t created unless traffic is sent to the web server. After visiting the site hosted on the same server as Graylog, it didn’t take long to see the data ingested in Graylog. This gave me the idea to keep adding more information to the Apache logs to suit my needs. I used the official Apache log format documentation and some Loggly material to adjust the log formats to my liking.

Sending Remote Logs to Graylog on the Same Network

With both Linux system logs and Apache logs working, I replicated these steps on all my VMs, and Graylog was soon obtaining logs from several VMs and two Raspberry Pis.

Sending Remote Logs to Locally Hosted Graylog

Once I had my own proof-of-concept Graylog instance running within my local network, I felt comfortable enough to want the logs generated by this very site, hosted on a Digital Ocean server, sent to Graylog as well. This would require opening my local network to the internet so that traffic could reach the VM that housed Graylog.

This was another headache.

There’s lots of information on how to do this. Essentially, you port forward traffic arriving at the IP address provided by your ISP. That seems simple with most routers. However, my ISP provides dynamic IP addresses that change once in a while. That has a simple workaround, Dynamic DNS, where a client checks frequently whether the IP address assigned by your ISP has changed and, if it has, updates the DNS record.

Well, that’s where I really got stuck. It turns out that my internet connection is behind what is called a Carrier-Grade NAT, or CGNAT. There is a finite number of IPv4 addresses in the world, and their availability is shrinking. To stretch the limited number of addresses an ISP has, it may place entire neighborhoods behind a CGNAT.

The concept is similar to having a router in your home. A router creates a private network, hands out an internal IP address to each machine that connects to it, and presents a single public address to the outside. A CGNAT does the same thing for entire neighborhoods. My home connection was assigned an address within the neighborhood network, and that neighborhood network shares a single IP address facing the rest of the world. This means port forwarding and dynamic DNS were not options.

This is where I recognized that I had hit another wall. I did find another option called ngrok, but it didn’t work the way I would have liked either. After looking over my known options, I chose to pack up my Graylog project for the time being.

Learning Splunk and New Motivation

I finally got some mentorship with Splunk, and after playing with it more, I was able to approach my goal at work and saw more possibilities with logging, which reignited my interest in Graylog.

Since I was now a full year into managing a VPS for aaronweiss.me, I felt it might be an excellent opportunity to launch another VPS with the sole goal of logging. However, I knew I needed more RAM than my current little $5 droplet at Digital Ocean. To have a droplet with a minimum of 4 GB of RAM would be $20 per month. I felt that was too steep for a little project like this, which led me on a journey to find a VPS that had the resources I needed at a reasonable price.

Looking for a low-priced VPS with that amount of RAM is not difficult, but you need to vet the companies, as some could be fly-by-night operations. I located Hetzner and OVH first.

Hetzner has extremely low-cost VPSs available. A single-core 4 GB VPS would be 5.68 euros, roughly $6.73 per month at the exchange rate at the time of publication. Hetzner’s servers are located in Germany and Finland. Given that I’m solely concerned about logs, I didn’t need low latency, and this would have been okay.

I had also found OVH, a French company with servers located worldwide. They had a server in Quebec for $10.58 with 2 vCPUs and 4 GB of RAM, and I chose to start with them. After about a day of setting things up, I was ingesting logs from my own website, my VMs, and my Raspberry Pis, and it was working very well.

But I wanted to reduce that cost even more. I finally found VPSDime, which offers a $7 VPS with 6 GB of RAM and 4 vCPUs. Those resources at that price seemed suspicious to me, but after some due diligence and a number of strong service and support reviews, I thought I’d try it out. The extra resources make a huge difference in speed. When I restarted Graylog or any portion of the stack on my local VMs, it could take about 5 minutes to load. OVH took about 4 minutes. VPSDime takes about a minute or less.

Support was great when I had an issue. Surprisingly, it wasn’t my issue or theirs. It was the lovely CenturyLink outage that occurred on August 30th, 2020.

Admittedly, neither OVH’s nor VPSDime’s interface is nearly as intuitive as Digital Ocean’s, but I was able to navigate VPSDime just fine.

Monitoring and Events: My Use Case for Graylog

As I stated earlier, one of my primary goals in creating this logging infrastructure was to have notifications sent when certain conditions are triggered.

From time to time, I was getting an “Error Establishing a Database Connection” message from WordPress on this website. Since I don’t visit my own website often, this error and the resulting downtime could go unnoticed for days. Unsure of when it happened or what caused it, I had a difficult time finding the relevant MySQL error log entries. Luckily, restarting MySQL brought the website back up in less than 10 seconds.

Among the reasons for this error are:

  • Incorrect database credentials
  • Corrupted database
  • Corrupted files
  • Issues with the database

Once I got Graylog up and running, I created an alert that would email me any time the website returned a 500 error. Finally, in late September, while I was sleeping, Graylog sent me several dozen emails over eight hours stating that there was a 500 error. Lo and behold, my website was down with the “Error Establishing a Database Connection” notification from WordPress.
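The condition behind that alert is just a search against the http_status field defined in the Apache LogFormat above. The event definition itself is configured in Graylog’s web interface, but the underlying query amounts to something like this (an illustration rather than my exact setup):

http_status:>=500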

After I restarted MySQL, I found the first occurrence of the error in Graylog, then found the corresponding entry and timestamp in the MySQL log file. The error stated:

2020-09-26T06:02:14.370246Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).

A quick search pointed me to a Stack Overflow question and answer showing how to enable explicit_defaults_for_timestamp in MySQL. So now it’s just a matter of waiting to see whether the database connection error occurs again. When it does, I’ll have the tools to search, discover, and investigate again.

Update: The issue wasn’t just explicit_defaults_for_timestamp; my system also needed a swap file. Despite the warnings that swap files can contribute to faster SSD degradation, I followed this Digital Ocean tutorial for Ubuntu Server 18.04 to create a swap file. Since then, there have been no MySQL failures.
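For anyone curious, the tutorial’s steps for a small swap file look roughly like this (the 1 GB size is just an example):

# Create and enable a 1 GB swap file
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make it persistent across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab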

The Future

Logging has been fun and I have a better understanding of how I can monitor the logs that each of my servers and websites produce. I still need to figure out how to get PHP and MySQL logs sent to Graylog, but I’m sure I’ll overcome that obstacle in due time.

Logging provides a smoking gun for why something occurred, but it may not provide the best picture of what is currently running or other statistics. That is where Nagios and Grafana come in, providing status monitoring, statistics, and graphing.

Additionally, as I discovered on this journey, I’d prefer to run this Graylog instance on a virtual machine at home rather than spend another $7 per month for a VPS. I’ve looked into using a remote VPN server to circumvent the CGNAT and provide a direct connection to the VM. That would ultimately provide more than just letting the remote VPS hosting aaronweiss.me point to my VM: I could also use the VPN to reach other parts of my network, such as my Plex installation, or use the same server to run PiHole and block ads. A VPN server doesn’t need the same resources as Graylog, so it could cost less, and I could even use Digital Ocean for it if I wished.

Filed Under: Website Administration Tagged With: elasticsearch, freenas, graylog, logging, truenas, wordpress

Using Raspberry Pi For PiHole Ad Blocking and Network UPS Tool Monitoring

September 5, 2020 by Aaron Weiss

When I first built my FreeNAS machine, I bought a UPS device to give it backup power in case of a power outage. But my primary computer also had a UPS device.

Recently, in order to minimize the amount of equipment that I have, I decided I only wanted to have one UPS. After searching for solutions, I was pointed to Network UPS Tools (NUT), which is open source software that can manage UPS units and the machines they power. This way I can have my primary computer and NAS connected to the same UPS unit, and control them both during certain power events.

Before this, I had also started to run PiHole on a VM on my NAS, but I quickly found two issues. First, if the VM halted for some reason, my entire network would halt too. And if I needed to restart my NAS for any reason, the network would also go down.

This finally gave me two solid reasons and goals to justify the purchase of a Raspberry Pi. Luckily, the Raspberry Pi 4B 2GB model had just become the primary model at $35, so I knew I’d have plenty of performance to run these two applications.

I was not only looking to minimize my homelab’s physical footprint, I also wanted to keep cords and cables to a minimum. So I decided to get a Power-over-Ethernet switch that would allow me to power my Raspberry Pis with just an Ethernet connection. I also wanted to shorten the Ethernet runs to make cable management easier, so I purchased a Cat 6 crimper to make my own cables to length.

Finally, this new Raspberry Pi would allow me to run cron jobs that control scripts on remote machines, such as my Digital Ocean snapshot script.

So to sum up my goals:

  • Run NUT server to monitor my NAS and Windows 10 machine
  • Run PiHole
  • Execute cron jobs

Network UPS Tools Equipment

Below are all the items I purchased for this project.

Item                    Quantity   Price     Store
Raspberry Pi 4 2GB      2          $70       PiShop
Raspberry Pi Case       1          $5        PiShop
POE Hat                 2          $41.90    PiShop
16GB SD Card            3          $17.97    Amazon
High Pi Case            1          $9.95     PiShop
SD to USB Card Reader   1          $9.99     Amazon
Total                              $154.81

The above doesn’t include shipping or the cost of a mini HDMI cable, as I already had one lying around. There are three SD cards in the list so that one can serve as a backup.

First Raspberry Pi

The Pi and its accessories are incredibly small and easy to put together. The POE switch worked perfectly. Using the Raspberry Pi Foundation’s Imager tool, I was able to install Raspbian on an SD card and start up the Pi with no issues. Once I got the basic settings just so, including assigning a static IP address, I disconnected the HDMI and connected via VNC instead, using VNC Viewer.

I typically name my devices after robots from film history. My FreeNAS is named Maria after the female protagonist in Metropolis, and I’ve maintained that naming convention for many of the Virtual Machines, networking equipment, and now my Pi. I named it Worker11811 after the character whose job was to maintain the Moloch machine.

Installing PiHole

This is an easy step, especially since I had already been running it in a VM. You simply run this code:

curl -sSL https://install.pi-hole.net | bash

That is the basic install; it runs you through a few configuration options such as choosing your upstream DNS provider (I chose Cloudflare), your static IP address, etc.

Backing up a PiHole and moving it to another installation is easy with the PiHole Teleporter.

Since I already had custom rules and settings from my VM, I used PiHole’s Teleporter to download the settings from the original VM, and then restore it to the newly installed PiHole.

I updated my Router’s DNS settings to point to 192.168.1.3, and boom, PiHole was working as expected.

Installing NUT

The Pi will act as both the NUT server and a NUT client. First, I connected my UPS to the Pi and ran lsusb to ensure it could see the UPS.

pi@worker11811-pi:~ $ lsusb
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 003: ID 0764:0501 Cyber Power System, Inc. CP1500 AVR UPS
Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

No issues there. It sees my CyberPower UPS.

Installing NUT isn’t difficult on Raspbian, as there is a Debian package available, and I found a few excellent guides, like this one on Medium:

sudo apt-get update && sudo apt-get install nut nut-client nut-server

Next, I added the following lines to /etc/nut/ups.conf (edited with sudo nano). I found the USB driver for my UPS using this Hardware Compatibility List. It was the same driver my FreeNAS was using, so I knew it worked well.

[ups]
driver = usbhid-ups
port = auto
desc = "CyberPower 1350PFLCD"

Next, I created users for each of the clients that will be connecting to this NUT server by editing my /etc/nut/upsd.users file:

[admin]
password = ******
actions = SET
instcmds = ALL

[upsmon_worker11811]
password  = ******
upsmon master

[upsmon_maria]
password = *******
upsmon slave

[upsmon_johnny5]
password = ******
upsmon slave

As you can see, there are four users: the admin, worker11811, maria, and johnny5 (my primary Windows 10 machine).
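I haven’t shown the rest of the server-side NUT configuration above, but for completeness, this is roughly what it looks like on the Pi; treat the exact values as assumptions rather than copies of my files:

# /etc/nut/nut.conf -- run as a network server so other machines can connect
MODE=netserver

# /etc/nut/upsd.conf -- listen on localhost and on the Pi's LAN address
LISTEN 127.0.0.1 3493
LISTEN 192.168.1.3 3493

# /etc/nut/upsmon.conf -- the Pi monitors its own UPS as the master
MONITOR ups@localhost 1 upsmon_worker11811 ****** master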

Then I started the nut-client and drivers:

systemctl restart nut-client.service
upsdrvctl stop
upsdrvctl start

Next, I checked to see if I could see the UPS device via NUT:

pi@worker11811-pi:~ $ upsc ups@192.168.1.3
Init SSL without certificate database
battery.charge: 100
battery.charge.low: 10
battery.charge.warning: 20
battery.mfr.date: CPS
battery.runtime: 1470
battery.runtime.low: 600
battery.type: PbAcid
battery.voltage: 24.0
battery.voltage.nominal: 24
device.mfr: CP1350PFCLCD
device.model: CRCA102*AF1
device.serial: CPS
device.type: ups
driver.name: usbhid-ups
driver.parameter.pollfreq: 30
driver.parameter.pollinterval: 2
driver.parameter.port: auto
driver.parameter.synchronous: no
driver.version: 2.7.4
driver.version.data: CyberPower HID 0.4
driver.version.internal: 0.41
input.transfer.high: 139
input.transfer.low: 88
input.voltage: 123.0
input.voltage.nominal: 120
output.voltage: 139.0
ups.beeper.status: disabled
ups.delay.shutdown: 20
ups.delay.start: 30
ups.load: 22
ups.mfr: CP1350PFCLCD
ups.model: CRCA102*AF1
ups.productid: 0501
ups.realpower.nominal: 810
ups.serial: CPS
ups.status: OL
ups.test.result: Done and warning
ups.timer.shutdown: -60
ups.timer.start: -60
ups.vendorid: 0764

Additionally, you can run upslog, which logs the UPS’s status every 30 seconds:

pi@worker11811-pi:/usr/local/bin $ upslog
Network UPS Tools upslog 2.7.4
logging status of ups@localhost to - (30s intervals)
Init SSL without certificate database
20200905 115423 100 123.0 20 [OL] NA NA
20200905 115453 100 123.0 23 [OL] NA NA
20200905 115505 100 123.0 23 [OL] NA NA
Signal 2: exiting

Lastly, we can check to see if NUT sees the same machine as a client:

pi@worker11811-pi:~ $ upsc -c ups
Init SSL without certificate database
127.0.0.1

The 127.0.0.1 entry is the local machine, which is the worker11811 Raspberry Pi itself. We’ll be using the same command a few more times later.

Updating FreeNAS UPS Settings to become a NUT Client

Now that NUT was working, I had to update the FreeNAS settings. FreeNAS already comes with NUT, making this super easy. I made the following changes:

  • Switched UPS Mode from Master to Slave (I hate this terminology)
  • Added the Remote Host IP to 192.168.1.3 which is my Pi’s internal network address
  • Added the Monitor Username and Password
FreeNAS UPS settings.

After saving, I could immediately see in the shell that my FreeNAS recognized the connection to the NUT server on my Pi.

To double-check, I ran the same upsc ups@192.168.1.3 command in the FreeNAS shell and received the same results as when I initially ran it on the Pi. Of course, we can run the same client command as before:

pi@worker11811-pi:~ $ upsc -c ups
Init SSL without certificate database
192.168.1.9
127.0.0.1

Now it sees my FreeNAS IP address so I’m in good shape here.

Adding Windows 10 as a NUT Client

This is where the real headaches occurred for me at first. Windows support is severely lacking. I was only able to find an ancient winNUT Google Code repository and an MSI installer directly on the NUT site. After nearly a week of frustration attempting to get both to work, I uninstalled both.

I then stumbled upon a GitHub repo called WinNut-Client, which has a super-simple interface for setting up your Windows machine’s connection to your NUT server. After downloading the MSI file, installing it, and setting up the config, I had my Windows machine capturing the UPS data and was able to choose when and how I want the machine to shut down. There’s even a hibernate option, which I prefer over a full shutdown if I’m away from my computer.

WinNUT Client Options

You can also choose how the service starts at the machine’s startup, how often it looks for updates, etc. What I didn’t see was the client appearing on my NUT server, so I opened an issue on the GitHub repo to get clarification.

WinNUT client status.

Monitoring the UPS with NUT

NUT comes with several CGI programs that can be viewed in a browser. However, I had issues attempting to load http://192.168.1.3/cgi-bin/nut/upsstats.cgi. After some time, I realized that PiHole uses the lighttpd web server, which already occupies port 80, whereas NUT’s CGI pages are served by Apache. I updated /etc/apache2/ports.conf, changed Listen 80 to Listen 8080, and restarted Apache. Going to http://192.168.1.3:8080/cgi-bin/nut/upsstats.cgi brought up a result that reminded me of the internet’s nascent days:

Example of the Network UPS Tools stats page.
Example of the individual UPS stats.

Regardless, my UPS’s stats were easily revealed to me.

Issues

Temperatures

The 4th generation Pis run rather hot, more so with the POE hat, which even comes with a fan that sits right on top of the CPU. Inside the official case, the temperatures were reaching the high 70s Celsius, which was quite worrying. I ended up purchasing the HighPi case, which is meant to provide extra room for taller hats; I figured this would be an excellent way to get more air to the Pi. It did reduce the temperatures, but not to where I’d really prefer them.

I was able to find instructions for configuring the Pi to manage the POE hat’s fan speeds so the fan isn’t on all the time; otherwise there’s a high-pitched whine. Of course, as I was testing this, I had to restart the Pi each time, so there’s a balance to strike between fan speed and the temperatures I feel are appropriate. The goal is generally to keep the CPU below 80 degrees Celsius, as that is when it begins to throttle performance.
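Those instructions boil down to adding fan trip points to /boot/config.txt and rebooting. The thresholds below (in millidegrees Celsius) are an example rather than the exact values I settled on:

# /boot/config.txt -- PoE HAT fan trip points, lowest to highest
dtparam=poe_fan_temp0=55000
dtparam=poe_fan_temp1=65000
dtparam=poe_fan_temp2=72000
dtparam=poe_fan_temp3=78000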

Network downtime with reboots

The first issue I encountered was that if I needed to reboot the Pi while working on it or installing general updates, it would take down my network for about a minute. This was essentially the same problem I experienced prior to this project, so I really hadn’t fixed it.

Second Raspberry Pi

The downtime issue is where I realized I’d prefer a second PiHole machine in order to have 100% uptime. Therefore, I purchased another Pi and POE hat, but chose to stay with the official Pi case I already had. To continue the Metropolis naming scheme, I named this one Rotwang after the primary antagonist.

I ran through the same process as before, except I chose not to run a graphical interface on this Pi, using just SSH to manage the device. After I installed PiHole, I ran the Teleporter process through the web interface to mirror the settings.

I had found a Reddit post where someone shared their script for keeping two PiHoles synced. I don’t believe it works that well, but essentially, if one Pi goes down or is rebooted, I still have a network connection.

While writing this post, I stumbled upon another post about syncing two PiHoles that might be a better option than what I have now. I expect to try it out sometime in the future. (EDIT/UPDATE [Sept. 7th, 2020]: This script works as described and is super easy to install and run.)

Backing up the Pis & Data Recovery

One thing I learned quickly is that the Pis don’t have a power-off button, so if they lose power abruptly without shutting down, the SD card data can become corrupted.

I found this super-simple Pi backup script on GitHub, and it’s been working like a charm. I have these backups sent to their respective folders on my FreeNAS, so they are always in a safe place.
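That script isn’t mine to reproduce here, but the general idea is to image the SD card to network storage, roughly like this (the device and paths are placeholders, and imaging a running card works best while the Pi is otherwise idle):

# Rough idea only -- not the actual script I use.
# Write a compressed, dated image of the SD card to a mounted FreeNAS share.
sudo dd if=/dev/mmcblk0 bs=4M status=progress | gzip > /mnt/freenas/pi-backups/worker11811-$(date +%F).img.gz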

This has already been helpful. After I attempted some changes to one of the Pis, it wouldn’t reboot. Instead of getting mad, I was able to write the backup image to the spare SD card and boot with no issues.

Conclusion

Entering the Raspberry Pi universe has been fun, and I see why these little guys are so popular. Any time I find myself relying on a virtual machine on my FreeNAS machine, I start to consider whether it’s better to run that application on one of these Pis or to add another one.

I still worry about the temperatures of both Pis, which run in the low 70s Celsius. After removing both Pis from their cases for a few days, I found them operating in the 50-degree range, which is much more reasonable to me. I am considering a Cloudlet case, such as this one, where I can have all the Pis set up in an array with plenty of airflow and the ability to run a POE switch nearby.

Temperature readings for Worker11811 and Rotwang.

Overall, I’m super happy with this setup, even though I recognize that I certainly over-engineered it. I spent over $150 just to have one less UPS near my gear. Honestly, I could have just added a serial connection from the UPS to the NAS and kept the USB connection to my Windows machine. But I’ve learned a lot and I’m happy with what I’ve accomplished.

Filed Under: Projects Tagged With: freenas, nas, network ups tools, networking, pihole, raspberry pi, winnut

How FreeNAS and WP-CLI Grew My Interest in Linux and Automation

April 6, 2020 by Aaron Weiss

Last year, I built a FreeNAS server. Initially, it was only meant as a means to store my computer backups and house my music and videos.

However, doing it right meant I needed to run commands in the shell, mostly to test the hard drives before I began to store files on them. I found an excellent resource, but I didn’t know what any of the commands meant. I executed them and waited until they were done.

The same was true for the Bash scripts I used to automate system configuration backups, reports, and notifications.

It was when I stumbled across some YouTube videos on how to run an Ubuntu server to host your own websites that I finally tested the virtual machine waters FreeNAS offered. I installed Ubuntu 18.04 Server LTS on a VM and learned a little at a time. The idea that I could learn a new operating system without buying another computer floored me.

Setting Goals

With VMs, CLI, and some basic web server understanding under my belt, I was ready to take a leap and move aaronweiss.me to a Digital Ocean server, but with the following goals:

  1. Separate WordPress Environments:
    • Development (DEV): Any new plugins, theme enhancements, or other changes that would affect the WordPress installation or how the software worked would be developed and tested on this installation. Plugin, theme, and core updates would also be completed and tested on this server.
    • Quality Assurance (QA): This environment was meant to test any changes made in the DEV environment as if it were a functional website. No changes would be made to this environment except common WordPress functions such as adding and managing posts and pages.
    • Production (PROD): This would be the live website visible to the public. Like QA, major changes would not be made in this environment.
  2. Automated Deployment Scripts: Deploy changes from DEV to QA and then QA to PROD
  3. Maintenance Scripts: Create a script to check for security vulnerabilities, clean up temporary files, back up the site, optimize the database, and compress images on all three environments.

Achieving these goals meant I could successfully host, develop, and maintain my website using a secure approach, with plenty of ways to get back up to speed quickly if something were to happen.
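To give a flavor of goal 2, a DEV-to-QA deployment script built on WP-CLI and rsync might look like the sketch below. The paths and domains are hypothetical, and this is not my actual script, just the shape of the idea:

#!/bin/bash
# deploy-dev-to-qa.sh -- illustrative sketch only
set -euo pipefail

DEV=/var/www/dev.example.com
QA=/var/www/qa.example.com

# Back up the QA database before touching anything
wp db export /tmp/qa-backup-$(date +%F).sql --path="$QA"

# Copy code changes (themes, plugins, core) from DEV to QA, leaving uploads and config alone
rsync -a --delete --exclude 'wp-content/uploads/' --exclude 'wp-config.php' "$DEV"/ "$QA"/

# Push the DEV database to QA and rewrite URLs for the QA domain
wp db export /tmp/dev.sql --path="$DEV"
wp db import /tmp/dev.sql --path="$QA"
wp search-replace 'dev.example.com' 'qa.example.com' --path="$QA" --skip-columns=guid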

Additional Achievements Unlocked

Once I achieved these goals, I was hooked on what else I could do. My next set of goals were:

  • Create an automated Digital Ocean snapshot script. Digital Ocean has a backup option, but it only runs once per week. That didn’t fly with me, so I wrote DOCTL Remote Snapshots as a way to control how often and how many snapshots are created.
  • Learn Git. I had some Git knowledge through Microsoft Team Foundation Server at work, but it was time to really learn it. I combined this with my DOCTL Remote Snapshots script and now have a published repository.

Next Up:

  • Create a website monitoring script. I don’t just need to know server uptime; I need to know website uptime. I want to know that my website can fully load and perform its basic tasks throughout the day (a rough sketch of the idea appears after this list).
  • Build a Raspberry Pi and install:
    • PiHole. PiHole is a free, open source ad blocker.
    • NUT (Network UPS Tools). The goal is a script to monitor two computers from the Raspberry Pi and shut them down gracefully using one uninterruptible power supply. I currently have two UPSs, one for my primary computer and one for my FreeNAS. The primary one can handle up to 850 watts, which is enough to cover all my devices, but it only has one port for monitoring a single device. Ideally, NUT will allow monitoring over Ethernet and can handle the shutdown of both machines.
    • Additionally, these two programs feed my yearning to build with and learn the Raspberry Pi.
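For the monitoring script, the core idea is small enough to sketch here; the URL, the expected text, and the mail command are placeholders rather than a finished script:

#!/bin/bash
# check-site.sh -- sketch of a basic website uptime and content check
URL="https://example.com"
EXPECT="expected page text"   # a string that should appear on a healthy page

if ! body=$(curl -fsS --max-time 30 "$URL"); then
    echo "$URL did not respond" | mail -s "Site down" admin@example.com
    exit 1
fi

if ! echo "$body" | grep -q "$EXPECT"; then
    echo "$URL loaded but the content check failed" | mail -s "Site degraded" admin@example.com
    exit 1
fi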

These are some short-term goals that I think are attainable in the near future.

Filed Under: Website Administration Tagged With: Digital Ocean, DOCTL, freenas, linux, ubuntu, virtual machine, wordpress

FreeNAS: A Hero’s Journey

July 5, 2019 by Aaron Weiss

Building and maintaining a FreeNAS server in my home has been a rewarding experience that continues to provide me with value and an avenue to grow personally and professionally.

It’s an experience that started with loss, then some stability, neglect, more loss, stress, and eventually stability and growth.

Backstory

I bought my Netgear ReadyNAS NV+ in 2010 after I suffered my very first HDD failure where I lost hours of video footage. Luckily, it was archival video, nothing urgent. The scenario left me yearning to never let it happen again.

I learned all about the available redundancy options for storage and backup, and set my sights on Netgear’s X-RAID feature on their ReadyNAS platform, which allows the array to survive the failure of one of its four hard drives. I thought that would be enough for my budget at the time.

Around 2017, I received a notice that the machine would no longer be supported with software updates. Great. It was bound to happen. On a tight personal budget, I looked for new options but never committed to anything.

Inciting Incident

A late-Summer 2018 Florida thunderstorm sent my dog into a static-electricity induced panic attack, including heavy panting and a neediness where she was attached to my leg with imaginary heavy-duty Velcro.

Despite the many UPS units around my home, I had failed to connect the coaxial cable that carried my broadband internet connection to the UPS. A lightning strike hit a nearby coaxial terminal that fed surrounding apartment buildings their internet. The surge carried itself to my modem, which let out an internal thunderclap, and then traveled through two additional WiFi routers, which were also destroyed.

The last device in the chain was my ReadyNAS, which was hit too. I was able to power it on again, and after a file system check, it booted, but with no network connectivity. The NIC was fried.

Call to Adventure

After the disappointment faded, it was time to act. I needed to:

  1. Get access to my data.
  2. Find and implement a new solution. I narrowed my options to:
    1. Buying a more modern ReadyNAS device
    2. Building a DIY NAS using FreeNAS or other similar software

I found similar accounts of users’ NIC ports getting hit by surges, with data recovery possible on the same device and model. It turns out that ReadyNAS’s X-RAID solution is just Linux software RAID with some bells and whistles, and there were many ways to retrieve the data. However, that would have required building a separate machine that could connect to all four of my drives. Therefore, building a new machine would be the eventual course of action.

Crossing the First Threshold

It turned out that I could purchase a used ReadyNAS of the same model. Finding one that matched my version was difficult, and oftentimes I waited too long to bid and an auction was bought out from under me. After a few weeks, I finally found a compatible unit on eBay.

The moment I was able to re-install all my drives, boot, and see my data appear was a million pounds of stress off my shoulders. My data was safe, but I needed a new solution to back up my main computer as soon as I could.

While I did have a great experience with ReadyNAS and Netgear products in general, I felt that I needed personal and professional growth, and FreeNAS would be my choice. I’ve built many computers in my life, but this would be the first built specifically to be a server, and the first I would maintain as one.

Land of Adventure

After nearly a month of research and deciding on a budget and components, I chose the following:

  • SUPERMICRO MBD-X9SCM-F-O Server Motherboard. This motherboard is now discontinued, which helped get it at a reasonable price.
  • Intel Pentium G2140 3.30GHz Processor BX80637G2140. A simple dual-core ECC-compatible processor
  • 3x Kingston ValueRAM 4GB 240-Pin DDR3 SDRAM ECC Unbuffered DDR3 1333 Server Memory Model KVR1333D3E9S/4G. They say 1GB of RAM for each terabyte of data.
  • 4x White Label SATA 6 GB/s 3TB Hard Drives. These White Label drives reduced my price considerably, but as you’ll later learn, it was not the best choice.
  • BitFenix No Power Supply MicroATX Tower Case BFC-PHM-300-KKXKK-RP. I thought this would be an excellent low-profile case. Wrong.
  • EVGA 450 B3, 80+ Bronze 450W, Fully Modular, EVGA ECO Mode. A simple, somewhat eco-friendly power supply.
  • 2x SanDisk Ultra Fit 32GB USB 3.0 Flash Drive – SDCZ43-032G-GAM46 . The 2 USB sticks house the FreeNAS OS and are mirrored for boot redundancy.
  • CyberPower CP825LCD 450W. A dedicated UPS for the new NAS.
  • 50cm 10Pin Motherboard Female Header to Dual USB 2.0 Adapter Cable. Helps connect the case cable to the motherboard.
  • 19-Pin USB3.0 to USB2.0 Adapter Header Cable. Connect case’s USB 3.0 power buttons to USB 2.0 motherboard header.
  • 2 Port Internal USB 3.0 Motherboard Header Adapter Cable
    • Easily connect the USB drives to the motherboard for a cleaner external aesthetic

Spiritual Death and Rebirth

Once all the components arrived, installing them was tricky because the BitFenix MicroATX case is a strange case not meant for a NAS, but I made it work anyway. I’ve considered moving my personal machine’s components into it, but that hasn’t happened. This was one of the first lessons of the journey.

Road of Trials

Testing

Thanks to the plethora of FreeNAS information, I had plenty of tools ready to test the components before moving forward.

My testing was predominantly done using Ultimate Boot CD, which includes CPU and memory testing tools. Memtest passed, as did the CPU burn-in. Then I noticed that one of the HDDs was not passing my tests. RMA!!!!

A new drive arrived but wasn’t being detected at all, even by my personal computer. That made two White Label HDDs RMAed. I contacted the company; they had no problem with the RMA and even explained they’d test the next HDD before sending it to me. I guess that shows you the profit margin on these White Label HDDs.

Installation

Installing FreeNAS is super simple. You really don’t need any command line knowledge at all. When you get to the screen that asks whether you want FreeNAS, you just hit the “hell yes” button, and within 5 minutes or so FreeNAS is installed, displaying a few options for additional administration and an IP address to visit.

Refusal of the Return

After the testing and installation, I began to configure the system the way I expected to use it: creating Windows shares, labeling drives within the system, and, one of the most rewarding aspects, finding and customizing shell scripts to perform testing and reporting.

To start, I was able to find scripts that email SMART reports, UPS reports, and security reports. Learning shell scripting is something I’ve been able to carry over to other server operations that would come later. One of the most important scripts saves a daily copy of the configuration file so that, if something happens to the OS or hardware, I’ll be able to get back up and running on a new system while keeping my configuration and pool.
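For illustration, the heart of that kind of config backup is just a copy of FreeNAS’s configuration database to the storage pool; the destination dataset below is an assumption, not my actual layout:

# Sketch of a daily config backup, run from a FreeNAS cron job
# /data/freenas-v1.db is the FreeNAS configuration database
cp /data/freenas-v1.db "/mnt/tank/backups/config/freenas-$(date +%Y%m%d).db"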

The Return

With everything set up, I could finally transfer my data from one NAS to the other. Since there’s about a 10-year difference in architecture, there was also about a 10x difference in speed. The ReadyNAS could sustain a transfer rate of less than 30 MB/s, whereas the FreeNAS can get to just about 100 MB/s. At the sub-30 MB/s rate, it would have been a day and a half of transfers.

Once everything was said and done, I was able to begin scheduling various backups and file syncs to bring myself back to a fully redundant workflow.

Freedom to Live

Thanks to several built-in plugins, I’ve been able to enjoy services such as Plex to watch videos on my FreeNAS from my Amazon Firestick.

I finally decommissioned my ReadyNAS, sold both the working and non-working NV+ devices on eBay for some scratch money, and had the old hard drives destroyed.

The process was long but fulfilling beyond just having my data redundant. I’ve been taking advantage of virtual machines and learning Linux in general, which is allowing me to grow further.

Filed Under: FreeNAS Tagged With: freenas, nas, raid

You Need a Backup and Disaster Recovery Plan

June 15, 2019 by Aaron Weiss

If you run a website, you’ll never realize how much a backup and disaster recovery plan helps you sleep at night until you have one, even if you never have to recover your website.

Recently, two hosting platforms and their users suffered missteps.

a2 Hosting, a shared hosting provider I’ve been using since 2013, has had its Windows servers down for over a week as the company deals with a ransomware attack. Additionally, the backups the company has available for customers appear to be over two months old.

DigitalOcean mistook a user’s script as a crypto-mining operation, and shut down a startup’s servers.

I’m fortunate not to be affected, as my a2 Hosting account is Linux-based and my DigitalOcean VPS is a low-risk profile. However, this is devastating for these companies and their users. I’m sure there are terms of service policies that cover these hosting companies in situations like these, up to a certain point.

There’s much to learn from these situations, and this is a good time to reflect on having plans in place for your own website.

Restorable backup plan

There’s no excuse not to have a backup plan and infrastructure for your computer, websites, and any important data. Here are some of the backups I have set up in my digital life:

  • For my main computer, I have a full weekly backup with daily incremental backups, that are then synced to my FreeNAS box, which are also synced to a Backblaze B2 Bucket.
  • For my FreeNAS server, I have a backup of the config file that is backed up to Dropbox, Backblaze, and mirrored on a second USB drive.
  • For my websites, my entire cPanel host instance is backed up each week, and then downloaded to my FreeNAS server. The individual websites have backups with BackupBuddy which have weekly and daily schedules relative to their respective performance, which are then synced with Dropbox. Some sites also backup to Amazon S3.

As you can see, I take my data very seriously. Some data has three or four destinations. I’m ready to launch a new computer image or FreeNAS box, or bring my entire cPanel instance or an individual website back from the dead.

In fact, I recently had a botched release of improvements to this very website. I was able to bring the site back up in less than 30 minutes because I had the infrastructure and documentation in place to recover.

Disaster Recovery Plan and Exercises

Just having a backup isn’t enough; knowing how to restore those backups is important too.

In the case of a2 Hosting, had someone had recent backups of their website, they could have found a new service, restored their backups, and changed their DNS to the new service. After DNS propagation, a website could return to full operations within 24 hours at the latest.

At my day job, I’ve participated in Disaster Recovery Exercises where I help validate whether or not applications I use can perform critical tasks after the recovery begins. It’s a boring exercise, but I now see how important it really is.

My recommendation is to have a test environment that is nearly identical to your site’s live environment, do something to make it no longer work, and then restore the site from a backup. You might even want to try and see if you can also find a new vendor and restore your site to that vendor. Having that knowledge will help you sleep better at night.

Planning for the Future

Despite this situation, I’m still sticking with a2 Hosting and DigitalOcean for the immediate future. a2 Hosting has been a great partner, and I have had only a few support tickets with them since I started. I know that if I were on the other side of the table, I’d be furious. Companies make mistakes, and no company is infallible.

The moral of this story: companies as large as these two should have had their customers’ data backed up to a separate location (although customers should also be responsible for their own data) and a plan in place to return their services to functionality more quickly.

You don’t have to be a2 Hosting or DigitalOcean, or their users. You now have the knowledge to be better.

Filed Under: Website Administration Tagged With: a2 hosting, amazon s3, backblaze, backup plans, backups, digitalocean, disaster recovery, dropbox, freenas
