zfs.rent


updates

A changelog-ish journal of what's going on.


February 22nd, 2021 [permalink]

Author: Ryan Jacobs

Hey everyone,

It's been a month and a half since the last update... There have been some positive developments behind the scenes. First, I signed an offer for a full-time Software Engineering + DevOps role at Clostra.com. Second, I sold a 20% stake in the company (Radious Subsystems) that runs zfs.rent. This secures enough funding to onboard two part-time employees to handle frontend development and customer support/billing.

Email + CRM

Previously, I was using a single email address (ryan@radious.co) and forwarding inquiries from support@zfs.rent to it. However, at this scale, it has become difficult to sort through customer emails, infrastructure support/invoices, and purchase receipts. Everything was being interleaved, which confused the CRM ticketing system.

From now on, please email support questions to support@zfs.rent. This will keep your email from being lost in the noise and allow us to get back to you more quickly.

Fine-tuning the Business Model

If you were an initial beta user, no worries -- whatever was initially offered is still available to you.

That being said... we are removing the rent-to-own model. It is simply not feasible for us to finance hundreds of hard drives for users. Going forward, we will only support drop-shipping new drives from Amazon/Newegg/etc. or shipping existing drives to our business postal box. This also reduces the number of SKUs/combinations in our pricing matrix.

However... before changing our business plan, we had been building up our hard drive stock. We have about 20 drives ready to sell outright (8TB WD Red NAS drives, ~$200/ea after tax). These are listed in the /store until they sell out. Also note that these drives are already loaded in ready-to-go hypervisors and can be deployed within the same business day.

If you were an initial beta user planning to go with the rent-to-own plan ($25 setup fee + $10/month per drive for 24 months), please contact us via the new email address and we can arrange that for you. There is no rush. If need be, we will purchase a new drive for your rent-to-own plan. For new customers, you will need to supply your own drives. Sorry :/

Also, after one too many trips to the datacenter, drives will only be loaded on the first of each month starting after March 14th (π-day) -- so plan accordingly. Until then, we will continue to load drives on an ad-hoc basis. Of course, if issues crop up, we will make a trip to the datacenter within a few days to resolve them.

Billing

To improve cash flow and simplify billing, we will be moving away from our current monthly invoicing system (Xero). We will be using Shopify to offer 3 SKUs: 3-month, 6-month, and 12-month service packages. These are priced at $10/month --> so $30, $60, and $120 respectively. 15 days before your service expires, we will reach out to you. (Generally, we will be fairly lax about late payments. We will give you plenty of heads-up before we begin the process of returning your drive.)

Since we are consolidating our drive-loading process, we are eliminating the $25 drive setup fee for new drives! This keeps it simple. $10/month flat.

SAS Cards

I'm happy to announce that we will be moving to LSI 9211 cards. There were a couple of roadblocks in migrating to SAS cards.

First, I initially purchased LSI 9260 cards. Note: these cards cannot be used in passthrough IT mode, which is required for software RAID systems such as ZFS; they only support hardware RAID configurations. That burned a couple of weeks while I waited for a shipment of LSI 9211 cards.

Second, RHEL/CentOS 8 stripped the LSI 9211 drivers from their preconfigured Linux kernel, despite the drivers being in the upstream mainline (see https://access.redhat.com/discussions/3722151). Fortunately, we are able to install the kernel-ml package from ELRepo, along with the proper development headers, to build zfs-dkms (which is used on our hypervisors for VM snapshots). On top of that, zfs-dkms cannot be built with the default CentOS "Development Tools" package group; I had to finagle a gcc-8 install. (This isn't really relevant for end users, because it's only used on the base hypervisor systems... but I just want to emphasize how much of a pain RedHat has made using these SAS cards.)
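
For anyone fighting the same battle, here is a rough sketch of the workaround. (The repo URLs and package names below are from memory rather than copied out of our runbooks -- double-check them against the ELRepo and OpenZFS docs before running anything.)

# Add ELRepo and boot into a mainline kernel that still ships the driver:
$ dnf install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
$ dnf --enablerepo=elrepo-kernel install kernel-ml kernel-ml-devel
$ reboot

# Back on the -ml kernel, build ZFS through DKMS against the new headers:
$ uname -r                  # confirm the mainline kernel is running
$ dnf install gcc make dkms
$ dnf install https://zfsonlinux.org/epel/zfs-release.el8_3.noarch.rpm
$ dnf install zfs           # DKMS-style packages are the repo default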

All in all, the benefits are threefold:

  1. Cable management is much easier with the SFF SAS cables.
  2. PCIe bandwidth is bumped from 1 Gbps (Marvell SATA cards) to 5 Gbps (LSI 9211 cards).
  3. We can support enterprise SAS drives in addition to consumer SATA drives.

Also, I want to give a shoutout to the patient customer who was our guinea pig for the SAS cards :)

ARIN

We are still in the midst of securing a block of IPv4 addresses to reduce our cost basis. More updates on that later.

Thanks for the support everyone! We appreciate your business and hope to stay in it for the long run.

-- Ryan

January 11th, 2021 [permalink]

Author: Ryan Jacobs

Hey all,

Sorry for the delayed update. My schedule has been packed lately. Due to a few straggling shipments, hypervisors #4 and #5 will be going online tomorrow instead of today. I'm also catching up on invoicing and emails today.

Dashboard

Ugh... sorry guys. I know it's been a long wait. I've been migrating the frontend I had in progress to a more turn-key solution (Svelte + Vercel). IT WILL BE LIVE TODAY. If you have shipped a drive, you will be able to pull up photos that I took before loading it into its hypervisor. I am also emailing you the photos. (Mainly for those who drop-shipped drives -- and therefore have never seen their drives or serial numbers.)

I'm also building an "equity page". It will show how many payments have been made toward your drive(s) and offer an option to pay them off.

Rent-to-own Availability

We ordered more drives. I will be reaching out to the next batch of rent-to-own users this week. If you are able to pay for your drive outright, I can prioritize your order. Please let me know if this is the case for you.

ARIN

We have successfully registered AS399197 with ARIN. We are currently working on securing IP transit and obtaining additional IPv4 addresses. Ideally, this will lower our bandwidth costs.

Misreported data usage

Some users have mentioned seeing ~4.00 GB of daily bandwidth consumption for their idle machines. This was caused by local broadcast traffic. The reports should be more accurate now. Give it a shot by running zz.
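
For the curious, the gist of the fix looks something like this -- a sketch of the idea rather than our exact rules ("vnet0" is an illustrative VM tap interface): exempt broadcast/multicast frames from the per-VM accounting counter so that only unicast traffic is tallied.

# Let broadcast/multicast bypass the accounting rule for a VM's interface:
$ iptables -A FORWARD -o vnet0 -m addrtype --dst-type BROADCAST -j ACCEPT
$ iptables -A FORWARD -o vnet0 -m addrtype --dst-type MULTICAST -j ACCEPT
$ iptables -A FORWARD -o vnet0 -j ACCEPT    # unicast-only accounting rule

# Read the byte counters off the accounting rule:
$ iptables -L FORWARD -nvx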

Happy Monday!

-- Ryan

It's easy to zz. - January 2nd, 2021 [permalink]

Author: Ryan Jacobs

Hey folks,

I hope everyone had a festive New Year's celebration! A couple of items on the agenda:

zz command-line tool

zz is a quick n' dirty tool to check your instance's data usage / drive temperature stats. Demo video here: zz/demo.mp4.

Now that the year has rolled over, we will be tallying data usage according to our pricing page. I'm about to render this on the website, but for now, this tool accomplishes the same thing.

Take a look at the source code to see which API endpoints are being used. Feel free to use the endpoints yourself. The backend looks at your source IP to determine the instance. All of the data exposed is read-only. In order to modify your machine with zz, you will need to set your API key.
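
For illustration, querying looks roughly like this. (The endpoint path below is made up -- pull the real routes from the zz source.)

# Hypothetical route; see the zz source code for the real endpoints.
$ curl https://zfs.rent/api/v1/stats    # read-only; instance inferred from source IP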

Attaching .iso files, triggering hard-reboots, and opening a VNC are all on the roadmap.

# Download compressed binary
$ wget https://zfs.rent/zz.gz
$ gzip -d zz.gz
$ chmod +x zz
$ ./zz

# Run from source
$ git clone https://github.com/radious-subsystems/zfs.rent
$ cd zfs.rent/zz
$ ./build.sh
$ ./zz.js

Obtaining an ASN

We are in the process of registering for an ASN and acquiring a set of IPv4 and IPv6 addresses for long-term use.

Moar machines!

Hypervisors #4 and #5 will be deployed on January 11th. If you are in this round, you have already been notified. Hypervisor #6 will likely be deployed on January 18th.

IPv6

A couple of users have been asking for IPv6 support. It will be rolled out this weekend. Your VM should pick up the router advertisements automatically. Disable IPv6 ahead of time if you are not interested.
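
Opting out is just the standard sysctl knobs on a typical Linux guest; persist them in /etc/sysctl.conf if you want the setting to survive reboots.

# Disable IPv6 on all interfaces at runtime:
$ sysctl -w net.ipv6.conf.all.disable_ipv6=1
$ sysctl -w net.ipv6.conf.default.disable_ipv6=1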

Cheers!
Ryan

Progress Update - Dec. 25th, 2020 [permalink]

Author: Ryan Jacobs

Happy Holidays!

First of all, I want to wish everyone a relaxing winter break.

I am taking a couple of weeks off from my day job to help enroll new users. Response time via email should be less than 24 hours. Feel free to email directly or make a post on our GitHub discussion board.

Open-Source

In the spirit of transparency and longevity, we are open-sourcing our software, database schemas, and runbooks. Safely migrating everything to the public repo will probably take a week or so.

Commitment to Communication

It has been about two weeks since our last public update. In the meantime, we have on-boarded several users and gathered feedback. From here on out, there will be a weekly update posted on Fridays. In addition, minor status updates may be posted on our Twitter.

API

As we mentioned in our philosophy.txt document, we strive for an API-first system. Everything exposed on our dashboards/graphs can be queried directly via JSON-over-HTTP APIs.

The current API is being fleshed out. But please check out these two endpoints for hypervisor #1:

Similar endpoints exist for user data (e.g., your own drive temperature stats and bandwidth usage).

Accounts

In order to simplify our on-boarding process, we have decided to use email-based auth codes for the time being. If you do not have an account, simply enter your email in the login dashboard and one will be created for you. A corresponding API key will be generated automatically.


Hardware Photos


Roadblocks: Software, Hardware, and Logistics

Here are a few roadblocks (and solutions!) we have encountered over the past two weeks.


Hardware Issue #1 - Temperature

Shortly after launch, we assembled a 2x 8TB system in a small 1U chassis to benchmark disk speeds and log temperatures within the datacenter. Unfortunately, cooling was an issue: we had not installed any fans directed at the drives. (Note: "m2_drive" refers to "machine-2", not the M.2 device standard.) The cyclical spikes in the temperature plot correspond to the datacenter's ambient day/night temperature cycle. Next trip, I plan to install some ambient temperature sensors in the rack.

These issues have been mitigated with our 4U chassis. In each chassis, we install five pressure-optimized fans in a push-pull configuration. Three intake fans push air directly into the drive cages; each drive is separated by roughly 0.25 inches, so air flows between the drives as if they were heatsink fins. At the rear of the chassis, two exhaust fans accelerate the airflow.

With the new fans and larger chassis, the drive temperatures are in a reasonable range!

These plots are regenerated every 5 minutes. The backend software will be open-sourced generically at: radious-subsystems/metrics. (Note: the repo hasn't been made public yet. But it will be soon!)
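
If you want to spot-check a drive's temperature by hand, SMART is the usual source. A minimal example, assuming smartmontools is installed (device path illustrative):

# Print SMART attributes and pick out the temperature line:
$ smartctl -A /dev/sda | grep -i temperature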


Logistics Issue #1 - AMD CPU Stock

It was bad timing to launch a hardware-centric business in the midst of an ongoing silicon shortage.

We pivoted to Intel-based systems instead of AMD in order to meet demand. This has increased our costs slightly, but end-user performance should be roughly the same.


Software Issue #1 - CentOS 8 Retracts Its 2029 LTS (Long-Term Support)

In other news, IBM/RedHat recently dropped their 2029 LTS timeline for CentOS 8... which is a real bummer. Additionally, their upgrade to CentOS 8.3 broke new OpenZFS 0.8.5 installs. In the interim, we installed ZFS 2.0.0 on several systems, and it has been working smoothly. The OpenZFS team has a fairly solid track record of cautious updates, and I trust 2.0.0 not to break anything.

Luckily, the OpenZFS devs are quick: within two weeks, they released ZFS 0.8.6, which installs on CentOS 8.3 without any additional effort.

I anticipate that by the time Jan. 2022 rolls around, Rocky Linux will provide a viable transition path from CentOS 8.

Software Issue #2 - DHCP firewalling

This is a royal pain-in-the-ass because of Linux raw sockets. (https://superuser.com/a/1457487)

DHCP requests and responses skip right over iptables and ebtables rules. We use DHCP to map a static IP to each VM's MAC address, so it was important to guarantee that outside DHCP queries could not leak into or out of our systems. I plan on doing a write-up of our solution in the docs later on.
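
Until that write-up lands, here is the flavor of it -- a sketch rather than our production config, assuming a dnsmasq-style DHCP server (the MAC, IP, and interface names are illustrative).

# dnsmasq: always hand this VM's MAC the same static IP.
dhcp-host=52:54:00:ab:cd:ef,203.0.113.10

# ebtables: drop DHCP (UDP 67-68) crossing the uplink port in either
# direction, so only our internal DHCP server can reach the VMs.
$ ebtables -A FORWARD -i eth0 -p IPv4 --ip-protocol udp --ip-destination-port 67:68 -j DROP
$ ebtables -A FORWARD -o eth0 -p IPv4 --ip-protocol udp --ip-destination-port 67:68 -j DROP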

Dec. 2nd, 2020 [permalink]

/mail/002.txt

Nov. 28th, 2020 [permalink]

/mail/001.txt

Schema Finalized - Nov. 20th, 2020 [permalink]

Author: Ryan Jacobs

Eh... I thought this ER diagram generator made the DB schema look slick, so I might as well post it. All of our base entities are tracked in the database. It records their power usage, bandwidth consumption, temperature, etc. Diagram of the SQL schema in production.

I'm going to port over a project that I've been working on to plot lightweight .png graphs of the systems. (example 1, example 2) Look ma! No JS!

(Axes labels will be added to the zfs.rent graphs of course.)