Website Administration

Learn about my approach to website administration and optimization. Articles will include projects and experiments about maintaining a website.

Learning Graylog for Fun and Profit

September 30, 2020 by Aaron Weiss

Since I’ve been expanding my knowledge by running my own VPS and the VMs on my FreeNAS/TrueNAS machine, I’ve learned the importance of logs.

Whether it’s a website, application, or operating system, most likely, they are producing various forms of logs. These logs hold the clues to generic operations and errors that can help you understand how well or poorly the application is working.

The problem is that these logs are constantly being generated and are not the most human-friendly. Operating systems alone generate tens of thousands of lines of logs. Keeping up with these logs is difficult. That’s where logging software comes in.

Of course, no conversation about logs can begin without mentioning Ren and Stimpy:

https://youtu.be/5Y0dGHkAkIY

How I Got Into Logging

At work, we use software called Splunk. At first, I could barely get a sense of what to do. This was because I didn’t understand our company’s infrastructure, and I was only given a query to execute to complete my specific task.

Later on, I got the idea of using this software for another project. After reaching out for help to the teams who had dropped this application into my lap, I got no replies and was at a standstill. That didn’t stop my motivation to learn how to approach my goal, because I saw lots of potential in having this information.

Splunk is available for free, but has some limitations:

  • Alerting (monitoring) is not available.
  • There are no users or roles. This means:

    • There is no login. You are passed straight into Splunk Web as an administrator-level user.
    • The command line or browser can access and control all aspects of Splunk Free with no user and password prompt.
    • There is only the admin role, and it is not configurable. You cannot add roles or create user accounts.
    • Restrictions on search, such as user quotas, maximum per-search time ranges, and search filters are not supported.
  • Distributed search configurations including search head clustering are not available.
  • Deployment management capabilities are not available.
  • Indexer clustering is not available.
  • Forwarding in TCP/HTTP formats is not available. This means you can forward data from a Free license instance to other Splunk platform instances, but not to non-Splunk software.
  • Report acceleration summaries are not available.
  • The Free license gives very limited access to Splunk Enterprise features.
  • The Free license is for a standalone, single-instance use only installation.
  • The Free license does not expire.
  • The Free license allows you to index 500 MB per day. If you exceed that you will receive a license violation warning.
  • The Free license will prevent searching if there are a number of license violation warnings.

The majority of these limitations aren’t terrible. However, the first bullet item is where I have an issue: “Alerting (monitoring) is not available.” Having an email sent to you when an event matches a rule you create is one of the reasons logging software is so powerful.

Therefore, I looked to Graylog as a possible solution. It offers an interface similar to Splunk’s, supports notifications, and is extensible through plugins and content packs. Best part: it’s open source.

There are many open source logging platforms available, but Graylog was the first that I tried, and it appeared to work well for me. The installation process is quite extensive. You will need reasonable familiarity with the command line, and I found you need a minimum of 4 GB of RAM for decent performance.

I initially installed Graylog in January 2020 using both the documentation and a decent YouTube tutorial that provided additional instruction. It required me to install Java, MongoDB, Elasticsearch, and then Graylog. This stack is the backbone of Graylog, and it is the reason you need at least 4 GB of RAM. I first installed all of this on a VM on my TrueNAS/FreeNAS machine.
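
For reference, this is roughly the installation order on Ubuntu. It is a sketch only: it assumes the vendor package repositories for MongoDB, Elasticsearch, and Graylog have already been added as described in the official documentation, and the package names reflect what the docs recommended around 2020, so they may differ for newer releases.

# Sketch only: repository setup omitted; package names may differ for newer releases
sudo apt update
sudo apt install -y openjdk-8-jre-headless   # Java runtime
sudo apt install -y mongodb-org              # MongoDB
sudo apt install -y elasticsearch-oss        # Elasticsearch (OSS build)
sudo apt install -y graylog-server           # Graylog itself
sudo systemctl enable --now mongod elasticsearch graylog-server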

Having Graylog installed isn’t enough; you need to send logs to it. Every system and application produces logs in different ways, and there are various ways to send logs to Graylog. That is where I got the most hung up on this project.

Creating a UDP Syslog Input

The installation instructions above cover ingesting rsyslog data into Graylog. Rsyslog is the standard syslog daemon on most Linux systems, and I found configuring it to forward to Graylog rather simple; it became easier to understand the more I did it across my VMs and RPis.
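
As a rough sketch of what that forwarding looks like on a Debian or Ubuntu system (the IP address and port below are placeholders for whatever UDP syslog input you have created in Graylog):

# Hypothetical example: forward all local syslog messages over UDP to a
# Graylog syslog input at 192.0.2.10:1514 (a single @ means UDP)
echo '*.* @192.0.2.10:1514;RSYSLOG_SyslogProtocol23Format' | sudo tee /etc/rsyslog.d/60-graylog.conf
sudo systemctl restart rsyslog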

Apache logs

What really took me some time was figuring out how to send logs from Apache servers. After lots of digging, I found two older blog posts (here and here) about how others had done it years prior, and I finally figured out how to do it myself.

After some time, I was able to put together the following:

LogFormat "{ \"version\": \"1.1\", \"host\": \"%V\", \"short_message\": \"%r\", \"timestamp\": %{%s}t, \"level\": 6, \"user_agent\": \"%{User-Agent}i\", \"source_ip\": \"%a\", \"duration_usec\": %D, \"duration_sec\": %T, \"request_size_byte\": %O, \"http_status_orig\": %s, \"http_status\": %>s, \"http_request_path\": \"%U\", \"http_request\": \"%U%q\", \"http_method\": \"%m\", \"http_referrer\": \"%{Referer}i\", \"from_apache\": \"true\" }" apache_prod_greylog
CustomLog ${APACHE_LOG_DIR}/prod_ssl_apache_gelf.log apache_prod_greylog
CustomLog "| /bin/nc -u 192.99.167.196 1514" apache_prod_greylog

The first line creates a custom LogFormat.

The second line outputs that format to a new log file in the Apache log directory.

The final line pipes the formatted output to a netcat command that sends the data to an IP address at a specific port. Note: this IP address is no longer live.

Apache logs aren’t created unless there is traffic sent to the web server. After visiting the site on the same server that Graylog is on, it didn’t take long to see the data ingested in Graylog. This gave me the idea to continuously add more information to the Apache logs to suit my needs. I used the official Apache Log format documentation and some Loggly information in order to adjust the log formats to my liking.

Sending Remote Logs to Graylog on the Same Network

With both Linux system logs and Apache logs working, I replicated these steps on all my VMs, and Graylog was soon receiving logs from several VMs and two Raspberry Pis.

Sending Remote Logs to Locally Hosted Graylog

Once I had my own proof-of-concept Graylog instance running within my local network, I felt comfortable sending it the logs generated by this very site and its server hosted at DigitalOcean. This would require opening my local network to the internet so traffic could reach the VM that housed Graylog.

This was another headache.

There’s lots of information on how to do this. Essentially, you port forward traffic through the IP address provided by your ISP, which seems simple with most routers. However, my ISP provides dynamic IP addresses that change once in a while. That is normally a simple workaround with Dynamic DNS, where a client checks frequently whether the IP address assigned by your ISP has changed and, if it has, updates the DNS record.

Well, that’s where I really got stuck. It turns out that my internet connection is behind what is called a Carrier-Grade NAT, or CGNAT. There is a finite number of IPv4 addresses in the world, and their availability is shrinking. To stretch the addresses an ISP has, it may place certain neighborhoods behind a CGNAT.

The concept is similar to having a router in your home. A router creates a network with a single public-facing IP address and hands out new internal IP addresses to each machine that connects to it. A CGNAT does the same thing for entire neighborhoods. My home connection was assigned an IP address within the neighborhood network, and that neighborhood network presents a single IP address to the rest of the world. This means port forwarding and dynamic DNS were not options.

This is where I recognized that I had hit another wall. I did find another option called ngrok, but it didn’t work the way I would have liked either. After reviewing my known options, I chose to pack up my Graylog project for the time being.

Learning Splunk and New Motivation

I finally got mentorship with Splunk, and after playing with it more, I was able to approach my goal at work and saw more possibilities with logging that reignited my interest in Graylog.

Since I was now a full year into managing a VPS for aaronweiss.me, I felt it might be an excellent opportunity to launch another VPS with the sole purpose of logging. However, I knew I needed more RAM than my current little $5 droplet at DigitalOcean offers. A droplet with a minimum of 4 GB of RAM would be $20 per month. I felt that was too steep for a little project like this, which led me on a journey to find a VPS with the resources I needed at a reasonable price.

Finding a low-priced VPS with that much RAM is not difficult, but you need to vet the companies, as some could be fly-by-night operations. I located Hetzner and OVH first.

Hetzner has extremely low-cost VPSs available. A single-core 4 GB VPS would be €5.68, roughly $6.73 per month at the exchange rate at the time of publication. Hetzner’s servers are located in Germany and Finland. Given that I’m solely concerned with logs, I didn’t need low latency, and this would have been fine.

I had also found OVH, a French company with servers located worldwide. They had a server in Quebec for $10.58 with 2 vCPUs and 4 GB of RAM, and I chose to start with them. After about a day of setting things up, I was ingesting logs from my own website, my VMs, and my Raspberry Pis, and it was working very well.

But I wanted to reduce that cost even more. I finally found VPSDime, which offers a $7 VPS with 6 GB of RAM and 4 vCPUs. Those resources at that price seemed suspicious to me, but after some due diligence and a number of strong service and support reviews, I thought I’d try it out. The extra resources make a huge difference in speed. When I restarted Graylog or any portion of the stack on my VMs, it could take about 5 minutes to load. OVH took about 4 minutes. VPSDime takes about a minute or less.

Support was great when I had an issue. Surprisingly, it wasn’t my issue or theirs. It was the lovely CenturyLink outage that occurred on August 30th, 2020.

Admittedly, neither OVH’s nor VPSDime’s interface is nearly as intuitive as DigitalOcean’s, but I was able to navigate VPSDime just fine.

Monitoring and Events: My Use Case for Graylog

As I stated earlier, one of my primary goals in creating this logging infrastructure was to have notifications sent when certain conditions are triggered.

From time to time, I was getting an “Error Establishing a Database Connection” from WordPress on this website. Since I don’t visit my own website often, this error and the resulting downtime could go unnoticed for days. Unsure of when it happened and what caused it, I had a difficult time finding the MySQL error log to see what triggered the error. Luckily, restarting MySQL brought the website back up in less than 10 seconds.
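
The recovery itself is a one-liner on my Ubuntu droplet, where the service is simply called mysql; the error log path below is the Ubuntu default and may differ on other setups.

# Restart MySQL, then review the most recent entries in its error log
sudo systemctl restart mysql
sudo tail -n 50 /var/log/mysql/error.log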

Among the reasons for this error are:

  • Incorrect database credentials
  • Corrupted database
  • Corrupted files
  • Issues with the database server

Once I got Graylog up and running, I created an alert for the website that would email me any time a 500 error occurred. Finally, in late September, while I was sleeping, I received several dozen emails from Graylog over an 8-hour span stating that there was a 500 error. Lo and behold, my website was down with the “Error Establishing a Database Connection” notice from WordPress.
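
The alert condition itself is just a search on the fields from my Apache log format; something along these lines, though the exact syntax depends on your Graylog version and how your streams are set up:

from_apache:true AND http_status:>=500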

After I restarted MySQL, I used Graylog to find the first time the error occurred, then located the corresponding entry in the MySQL log file. The error stated:

2020-09-26T06:02:14.370246Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).

A quick search pointed me to a Stack Overflow question and answer showing how to enable explicit_defaults_for_timestamp in MySQL. So now it’s just a matter of waiting to see whether this database connection error occurs again. When it does, I’ll have the tools to search, discover, and investigate.
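
For what it’s worth, this is roughly how that setting can be applied, assuming the stock Ubuntu path for the MySQL configuration file (your layout may differ):

# Append the option under a [mysqld] group, then restart MySQL
printf '\n[mysqld]\nexplicit_defaults_for_timestamp = 1\n' | sudo tee -a /etc/mysql/mysql.conf.d/mysqld.cnf
sudo systemctl restart mysql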

Update: The issue wasn’t just explicit_defaults_for_timestamp; my system also needed a swap file. Despite the warnings that swap files can contribute to faster SSD degradation, I followed this DigitalOcean tutorial for Ubuntu Server 18.04 to create a swap file. Since then, there have been no MySQL failures.
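
That tutorial boils down to a few commands; a condensed sketch with a 1 GB swap file (size it to taste):

# Create and enable a 1 GB swap file, then persist it across reboots
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab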

The Future

Logging has been fun and I have a better understanding of how I can monitor the logs that each of my servers and websites produce. I still need to figure out how to get PHP and MySQL logs sent to Graylog, but I’m sure I’ll overcome that obstacle in due time.

Logging provides a smoking gun for why something has occurred, but it may not provide the best picture of what is currently running or other statistics. That is where Nagios and Grafana come in, providing status monitoring, statistics, and graphing.

Additionally, as I discovered on this journey, I’d prefer to run this Graylog instance on a virtual machine at home rather than spend another $7 per month on a VPS. I’ve looked into using a remote VPN server to circumvent the CGNAT and provide a direct connection to the VM. That would ultimately provide more features than just letting the remote VPS hosting aaronweiss.me reach my VM. I could also allow this VPN to connect to other portions of my network, such as my Plex installation, or use the same server to run PiHole to block ads. A VPN server doesn’t need the same resources as Graylog, so it could cost less and could even be hosted on DigitalOcean if I wish.

Filed Under: Website Administration Tagged With: elasticsearch, graylog, logging, truenas, wordpress

How FreeNAS and WP-CLI Grew My Interest in Linux and Automation

April 6, 2020 by Aaron Weiss

Last year, I built a FreeNAS server. Initially, it was only meant as a means to store my computer backups and house my music and videos.

However, doing it right meant I needed to run commands in the shell, mostly to test the hard drives before I began to store files on them. I found an excellent resource, but I didn’t know what any of the commands meant. I executed them and waited until they were done.

The same was true for the Bash scripts that automate system configuration backups, reports, and notifications.

It was when I stumbled across some YouTube videos on how to run an Ubuntu server to host your own websites that I finally tested the virtual machine waters FreeNAS offered. I installed Ubuntu 18.04 Server LTS on a VM and learned a little at a time. The idea that I could learn a new operating system without buying another computer floored me.

Setting Goals

With VMs, CLI, and some basic web server understanding under my belt, I was ready to take a leap and move aaronweiss.me to a Digital Ocean server, but with the following goals:

  1. Separate WordPress Environments:
    • Development (DEV): Any new plugins, theme enhancements, or other changes that would affect the WordPress installation or how the software worked would be developed and tested on this installation. Plugin, theme, and core updates would also be completed and tested on this server.
    • Quality Assurance (QA): This environment was meant to test any changes made in the DEV environment as if it were a functional website. No changes would be made in this environment except common WordPress functions such as adding and managing posts and pages.
    • Production (PROD): This would be the live website visible to the public. Like QA, major changes would not be made in this environment.
  2. Automated Deployment Scripts: Deploy changes from DEV to QA and then from QA to PROD.
  3. Maintenance Scripts: Create a script to check for security vulnerabilities, clean up temporary files, back up the site, optimize the database, and compress images on all three environments (a rough sketch follows this list).
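
To give a sense of what a maintenance pass looks like, here is a stripped-down sketch using WP-CLI. The path is a placeholder, the real script loops over all three environments, and jpegoptim is just one possible image-compression tool:

#!/bin/bash
# Hypothetical, simplified maintenance pass for one WordPress environment
SITE_PATH=/var/www/example-site   # placeholder path

cd "$SITE_PATH" || exit 1
wp core verify-checksums                 # sanity-check core files against wordpress.org
wp db export "backup-$(date +%F).sql"    # back up the database
wp db optimize                           # optimize the database tables
wp transient delete --all                # clean up temporary data
find wp-content/uploads -name '*.jpg' -exec jpegoptim --strip-all {} +   # compress images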

The above goals meant I could successfully host, develop, and maintain my website using a secure approach, with lots of ways to quickly get back up to speed if something were to happen.

Additional Achievements Unlocked

Once I achieved these goals, I was hooked on what else I could do. My next set of goals were:

  • Create an automated DigitalOcean snapshot script. DigitalOcean has a backup option, but it only runs once per week. That didn’t fly with me, so I wrote DOCTL Remote Snapshots as a way to have some control over how often and how many snapshots are created.
  • Learn Git – I’ve had some Git exposure through Microsoft Team Foundation Server at work. However, it was time to really learn Git. I combined this with my DOCTL Remote Snapshot script and now have a published repository.

Next Up:

  • Create a website monitoring script. I don’t need server uptime, I need to know website uptime. I want to know that my website can fully load and perform its basic tasks throughout the day (see the sketch after this list).
  • Build a Raspberry Pi and install:
    • PiHole. PiHole is a free, open source ad blocker.
    • NUT (Network UPS Tools). The goal is a script to monitor two computers from the Raspberry Pi and shut them down gracefully using one uninterruptible power supply. I currently have two UPSs, one for my primary computer and one for my FreeNAS. The primary one can handle up to 850 watts, which is enough to cover all my devices, but it only has one monitoring port, which is connected to the primary machine. Ideally, NUT will allow monitoring over Ethernet and can handle the shutdown of both machines.
    • Additionally, these two programs feed my yearning to build with and learn the Raspberry Pi.
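
A minimal version of the monitoring idea, checking that the homepage returns a 200 and contains an expected string, might look something like this. The alert address and the expected string are placeholders, and it assumes a working mail command on the machine running the check:

#!/bin/bash
# Hypothetical uptime check: alert if the site doesn't return HTTP 200
# or the page is missing an expected string
URL="https://aaronweiss.me"
ALERT="me@example.com"   # placeholder address

status=$(curl -s -o /tmp/homepage.html -w '%{http_code}' "$URL")
if [ "$status" != "200" ] || ! grep -q "Aaron Weiss" /tmp/homepage.html; then
    echo "Site check failed with HTTP status $status" | mail -s "Website down: $URL" "$ALERT"
fi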

These are some short-term goals that I think are attainable in the near future.

Filed Under: Website Administration Tagged With: Digital Ocean, DOCTL, linux, ubuntu, virtual machine, wordpress

Automated DigitalOcean Snapshots with DOCTL

December 22, 2019 by Aaron Weiss

DigitalOcean snapshots are a blessing if you’re clumsy like me. They’ve allowed me to recover from my mistakes, and even from a hacking situation.

However, I’ve been disappointed with one aspect of DigitalOcean. Their backup plan for Droplets only creates one backup per week, and you cannot schedule it yourself. They also offer snapshots, which can be created ad hoc through their dashboard, but that’s no way to live life.

I discovered DigitalOcean has its own command line interface (CLI) called DOCTL, which allows you to access your DigitalOcean account and droplets remotely from a Linux machine.

After learning about this, I immediately wanted to leverage this with the following goals:

  1. Shut down the server
  2. Take a snapshot, as that is safer and reduces the chance of corrupted files
  3. Reboot the server
  4. Once the server is back on and live, delete the oldest snapshot if there are more than a certain number.

This would keep my server lean and give me two backups a week for a maximum of 4 weeks, if you count their backup plan.
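
Conceptually, those four steps map onto a handful of doctl commands. This is a bare-bones outline with a made-up droplet ID; the published script adds error handling, the retention logic, and the email notification:

# Hypothetical outline only; 123456789 stands in for a real droplet ID
doctl compute droplet-action shutdown 123456789 --wait
doctl compute droplet-action snapshot 123456789 --snapshot-name "weekly-$(date +%F)" --wait
doctl compute droplet-action power-on 123456789 --wait
doctl compute snapshot list --resource droplet      # review existing snapshots...
doctl compute snapshot delete SNAPSHOT_ID --force   # ...and prune the oldest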

Table of Contents

  • Introducing DOCTL Remote Snapshot
  • Installation and Authentication
    • Install DOCTL on a separate Linux installation
    • Obtain the DigitalOcean API Key
    • Authenticate Your Account
  • The DOCTL Remote Snapshot Script
    • Configure your Snapshot
    • How to Execute the Script
    • Cronjob
  • Conclusion

Introducing DOCTL Remote Snapshot

The DOCTL Remote Snapshot script I’ve created is among several firsts for me:

  1. Learning Git and using Github
  2. Using DOCTL
  3. Publishing and maintaining a public repository

I’m proud of this script, and I’ll be continuing to improve upon it. With no further ado:

Installation and Authentication

Install DOCTL on a separate Linux installation

Since our script will require us to shut down the droplet to prevent any corruption in our DigitalOcean snapshots, we’ll need a separate machine to make the remote calls and schedule the script via cron. If you run this on the same droplet, it will shut itself off and that’s it: with the server off, the rest of the script cannot be executed.

As an example, I have a separate Ubuntu virtual machine (VM) running on my FreeNAS server that I set up specifically to run cronjobs for remote services such as this script.

If you’re performing a fresh Ubuntu 18.04 LTS Server install, you can opt to have DOCTL installed alongside the server from the get-go. Otherwise, you’ll need to follow the GitHub documentation to install it. There is also this super awesome community-written guide.

Obtain the DigitalOcean API Key

This is the first step as it will be required to connect your script with your DigitalOcean account.

  1. In your DigitalOcean dashboard, visit the API page: https://cloud.digitalocean.com/settings/api/tokens?i=74b08a
  2. Generate a new token.
  3. Enter a name for the token.
  4. Copy the new token.

Authenticate Your Account

On your remote server, run the following command:
sudo doctl auth init

Then you’ll be prompted to enter your key from the first step. Once that is complete, you’re ready to use DOCTL on your server.

The DOCTL Remote Snapshot Script

Next, you’ll want to run the following command in the directory where you’d like this script to be executed.
git clone https://github.com/aaronmweiss/DOCTL-Remote-Snapshot.git

This will clone the GitHub repository into a directory titled DOCTL-Remote-Snapshot.

Configure your Snapshot

You’ll then want to edit the dodroplet.config file and supply the following variables (an example appears after this list):

  • dropletid: Your Droplet’s ID. If you do not know your droplet’s ID, log into your DigitalOcean account, click on the droplet, and the URL of your droplet will contain the ID after the /droplets/ segment, like so: https://cloud.digitalocean.com/droplets/XXXXXXXXX/graphs?i=78109b&period=hour. The “XXXXXXXXX” in the URL string is your droplet’s ID.
  • numretain: The number of snapshots you’d like to keep, as a positive integer.
  • recipient_email: The email address that should receive completion notifications.
  • snap_name_append: Optionally adds additional information to the end of the snapshot’s name.
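
A filled-in dodroplet.config might look something like this. All the values are made up, and I’m assuming the file uses plain key=value assignments:

# Hypothetical example values only
dropletid=123456789
numretain=8
recipient_email=me@example.com
snap_name_append=-weekly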

With the configuration file ready to go, you’re now ready to remotely execute DigitalOcean snapshots.

How to Execute the Script

First, to be on the safe side, let’s make sure the script is executable:

sudo chmod +x auto_snapshot.bash

Now it’s time to run the script:

sudo bash auto_snapshot.bash

Give this some time to run. Once the droplet is powered off, the script creates the snapshot, which is the longest part of the process. According to DigitalOcean, “Snapshots can take up to 1 minute per GB of data used by your Droplet,” although I’ve found it can take much longer.

Once the snapshot is complete, the droplet is powered on again. Then any snapshots beyond the value in numretain will be deleted. You can use the -r flag to bypass any snapshot deletions.

After this is complete, a notification is sent to the user’s email supplied in the dodroplet.config file.

Cronjob

This works best as a cronjob. You can set one up by running sudo crontab -e and entering something similar to:

0 1 * * 3 /bin/bash /home/$user/autoscripts/auto_snapshot.bash

This runs the script at 1:00 AM every Wednesday, where $user is your username on your Linux machine. You might, of course, want to keep the script in another location on your Linux installation.

You can use Crontab Generator to generate the cronjob command for you.

Conclusion

That’s it. This is my first published Bash script and GitHub repository. I’m extremely proud of this script, even though it’s rather simple. It fills a need of mine that wasn’t readily met elsewhere.

It is my hope that you’re able to use this to automate your DigitalOcean snapshots and keep your droplets safe so you can continue to build on your projects. Feel free to fork it, contribute, and comment on GitHub.

This article was updated on March 10th, 2020.

Filed Under: Projects, Website Administration Tagged With: bash, Digital Ocean, digitalocean, DOCTL, git, github, linux, snapshots, web server

How My WordPress Website Got Hacked and How I Recovered

November 26, 2019 by Aaron Weiss

My WordPress website was hacked, and it was super embarrassing.

Just when my recent blog post about why you shouldn’t download nulled versions of BackupBuddy was starting to rank well for various keywords and gain some decent traffic, my site began to redirect to another website. I couldn’t log into my website at all, and I wasn’t able to find much information about this particular hack to fix it, especially since I couldn’t gain access to my site.

However, I still had access to my server, and because I had an awesome disaster recovery plan, I was able to return my website to a running state quickly.

Why did my Website get Hacked?

I have not figured out exactly what happened. It could have been a bad plugin, which is making me reconsider which plugins are really necessary. I’ve always felt that the plugins I’ve chosen were solid, but it’s time to weed out plugins whose features can be moved to a functions.php file or another implementation.

I had also moved to Austin, TX, and had not updated my site as I normally would have. I’d say this was my biggest mistake. I should have found time to maintain my website. I knew this in the back of my mind, and I didn’t commit to it.

How I Recovered My Site

Typically, I would have run a BackupBuddy recovery using importbuddy.php. However, since my website and dashboard were redirecting to another website, I was unable to access my site from a browser, so that was out of the picture.

Since I still had access to my server, I was able to utilize DigitalOcean’s backups and recover my site from a version that was less than one week old. Given that I hadn’t published anything new or made any changes to the website since then, this was fine and worked.

What are the Plans for the Future?

Essentially, better maintenance and updating of the website and platform on a more regular and automated basis.

I had previously created Bash scripts that check the site’s core installation, theme, and plugins for any known CVE vulnerabilities, create a full site backup, optimize the database, and notify me by email when updates are available. However, the CVE vulnerability check stopped working, and since I was busy moving, I never had a chance to notice this gap. It has since been corrected.
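
The update-notification piece is the simplest part of that script; something along these lines, where the site path and email address are placeholders and a working mail command is assumed:

#!/bin/bash
# Hypothetical check: email me when plugins or themes have updates available
cd /var/www/example-site || exit 1   # placeholder path

pending=$(wp plugin list --update=available --format=count)
pending=$((pending + $(wp theme list --update=available --format=count)))

if [ "$pending" -gt 0 ]; then
    echo "$pending plugin/theme update(s) pending" | mail -s "WordPress updates pending" me@example.com
fi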

I don’t believe in automatic updates as any update can cause problems and I like to test updates, especially core and theme updates, very carefully before I commit. So my future automation will take that into consideration.

How Do I Feel About This Now?

I’m okay about it. It’s embarrassing, but I’ve also realized that it’s okay. It happens. I had a plan to recover and executed it perfectly. This happens with WordPress websites, and it gives me a chance to recognize the gaps in my WordPress maintenance and re-commit to what’s necessary for my website.

The absolute worst thing about this is that I lost lots of momentum with SEO traffic for my BackupBuddy article, but that’s the name of the game. I believe that if I continue to work on creating a great website, I don’t have much to worry about long-term, and my rankings will return.

Filed Under: Website Administration, WordPress Tagged With: backups, hacked, wordpress, wordpress maintenance

Shared Web Hosting is Becoming More Expensive

August 15, 2019 by Aaron Weiss

If you’ve used shared web hosting for your website, most likely you’ve been using cPanel. It’s a popular website management platform that allows web hosts to provide each user with a massive amount of features on one server.

Recently, cPanel announced new price increases. Hosts like GoDaddy, HostGator, Bluehost, etc., have stuffed tens of thousands of user accounts onto their servers at a low cost because of the close competition between them. Since the web hosting market is already highly competitive, the margins are thin. These price increases are going to reduce these companies’ revenue, and of course they are going to pass those costs along to the end user.

I’ve enjoyed a significant amount of value on my a2 Hosting account for years. Thanks to the value that cPanel provides, I’ve been able to host many different websites and enjoy email features at less than $10 per month. I’m not concerned about my costs going up any time soon as I paid for 3 years in advance, locking me in. However, I’m still impressed by the increasing amount of value I get from this service.

Calculating cPanel’s Increases at Scale

The cost increases might not seem like much at first, and for the larger hosts they aren’t that significant.

Their Premier license is $45 per month and allows up to 100 accounts on the server. Each additional account is $.20. The aforementioned hosts have thousands of accounts on the same server.

Let’s imagine you have 1,000 accounts on the same server:

$45 per month covers the first 100 accounts

900 additional accounts x $0.20 = $180 per month

That’s $225 per month, or $2,700 a year, per server.

Most hosts have hundreds, thousands, or tens of thousands of servers.

Smaller Hosts and Developers Will Feel the Pain

The larger hosts I discussed earlier aren’t the ones that are going to suffer the most. It’s the smaller hosts and resellers, such as the website developers and designers who host their clients’ sites, that are going to be hurt.

It may seem like we’re talking about just a few more dollars each month, but at scale, every dollar counts. These cPanel users have thinner margins, and their real take-home income comes from churning out product or designs, not from the hosting.

Open Door for Competitors

This will surely open the door for competing website hosting control panels and dashboards to target the smaller agencies and resellers. Granted, many of them are far behind when it comes to features and community support.

It’s hard to say how much this will affect end users, you know, small-time bloggers like you and me. I expect pricing to increase, but not nearly enough to place it out of reach. Most entry-level cPanel shared hosting accounts run around $3-5 per month on the low end to $12 on the higher end when paid a year in advance. If each new account costs the host another $.45, I expect prices to increase by about $1.

That’s not a major change for most low-cost shared web hosting customers, who are either bloggers recreationally managing a website or community, or are on higher-priced plans that serve larger amounts of traffic.

New Era of Shared Web Hosting

Because of the increased costs and the new avenues that have opened for competing shared web hosting control panels, I do see a new era coming in this hosting industry.

cPanel may make shareholders happy in the short term, but unless it can innovate at a much faster pace than the competition can reach parity, I suspect cPanel’s market share will diminish.

In the meantime, if you’re in the market for shared web hosting, I recommend a2Hosting and locking down your rates now before they increase.

Filed Under: Website Administration Tagged With: a2hosting, cpanel, shared hosting, website control panel

Infinite Scroll is the Worst

June 27, 2019 by Aaron Weiss

I don’t know when it started, but I hope it stops soon.

Infinite scrolling on websites annoys me.

Why? Because, like the infinite scroll trend itself, I never know when a website’s page is going to end. It just keeps going, without purpose.

The Pros of Infinite Scroll

Few.

  • Feeds. I find infinite scroll makes sense in social media feeds. Users post countless pieces of material on these networks, and displaying posts in descending order is an appropriate way to show that content.
  • Loading. It’s also helpful because you don’t have to wait for a page to load new information; it just appears.

That’s all I can ponder.

The Cons of Infinite Scroll

Far greater.

  • The great unknown. You don’t know when it ends. It just keeps going. Pagination at least shows how many pages to expect.
  • You can’t find what you’re looking for. This includes using a browser’s find on page feature.
  • Lack of control for a user.
  • I expect there to be a footer on most websites.
  • Doubt it’s search engine friendly. How does a crawler discover content not yet rendered on a page?

That’s it. Just a short rant about something I hope goes away.

August 1st, 2019 Update: It looks like I might have gotten my wish, and I’m not the only one who feels this way.

A new bill has been introduced to ban infinite scrolling, autoplay, and other addictive technology as part of the Social Media Addiction Reduction Technology (SMART) Act.

One of the things I did not consider is the accessibility impact of infinite or endless scrolling, or its potentially addictive nature. Sometimes our politicians do have our best interests in mind.

Filed Under: Website Administration Tagged With: infinite scroll, rant

