Where Is My OCR In 2019?

Yes, you heard right - it is 2019 and I STILL have no way to OCR an image natively in macOS Catalina. And no, Notes does not count - it sucks. I do not want to merely OCR and index images; I want to convert images to text I can copy, paste, and work with. To the best of my knowledge, Notes only OCRs and indexes content to make it searchable; it does not let you copy the recognized text.

OneNote's OCR sucks. I'll add an image and after 30 minutes there is still no option to copy the text; sometimes it takes up to an hour. That makes it a non-starter.

It is 2019 and I assumed that by now we would have native OCR support in our operating systems. So disappointed. At least I found this, but I should not have to.

Project #10: Side Table

I have one side table over which I constantly wage war with my little one for real estate, so I decided to make my own side table instead of buying one. I did not spend hours designing it - it is really basic, made from fir and finished with polyurethane to maximize water resistance, since I will be placing various types of liquid on it.

Here it is...

Side Table
Full Article

Fighting Corporates - Telus Sucks, I Lose

So recently I detailed an ongoing issue I had with Telus. Unfortunately, I did not check the traffic usage on Telus' website right before the end of my billing cycle, but I do know I checked about a week before the end and it showed around 500GB - just 40GB more than my own counters, which is expected given the changes I made at the beginning of the cycle.

So it should come as no surprise that I was shocked to get ANOTHER bill with a $45 overage charge (472GB over 1TB):

Telus Overage #2
Full Article

Using Telegraf, Flux, InfluxDB and Grafana to Join Queries

I wanted to achieve a very simple thing - my graph showing CPU usage per core should be normalized. Regardless of whether a machine has 8 cores or 2, I want to see the CPU usage as a percentage out of 100. This cannot be achieved with InfluxDB and InfluxQL at the moment, because InfluxQL does not support joining across measurements. Telegraf stores the number of CPUs in the system measurement and the per-core CPU usage in the cpu measurement, so normalizing requires combining the two.

Flux is a new query language that will be part of InfluxDB 2.0, which is currently still in alpha; a more limited version of Flux is available with InfluxDB 1.7+. It took me quite some time, but eventually I figured out how to write a Flux query that works in Grafana with the beta Flux (InfluxDB) datasource plugin to create this:

Normalized Per Core CPU Usage - Grafana via InfluxDB
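
For reference, here is a minimal sketch of the kind of Flux query involved. It assumes Telegraf's default schema (the usage_user field and cpu tag in the cpu measurement, the n_cpus field in the system measurement), a telegraf/autogen bucket, and that both measurements are collected at the same interval so their timestamps line up - adjust the names and the time range to match your setup:

// Sketch only: assumes Telegraf defaults and the "telegraf/autogen" bucket.

// Per-core CPU usage from the "cpu" measurement (one series per core).
cpu = from(bucket: "telegraf/autogen")
  |> range(start: -1h)
  |> filter(fn: (r) =>
      r._measurement == "cpu" and
      r._field == "usage_user" and
      r.cpu != "cpu-total")
  |> keep(columns: ["_time", "_value", "host", "cpu"])

// Core count from the "system" measurement (n_cpus is an integer field).
cores = from(bucket: "telegraf/autogen")
  |> range(start: -1h)
  |> filter(fn: (r) =>
      r._measurement == "system" and
      r._field == "n_cpus")
  |> keep(columns: ["_time", "_value", "host"])

// Join on time and host (Telegraf writes both at the same interval),
// then scale each core's usage by the core count so that a fully
// loaded machine tops out at 100%.
join(tables: {cpu: cpu, cores: cores}, on: ["_time", "host"])
  |> map(fn: (r) => ({
      _time: r._time,
      host: r.host,
      cpu: r.cpu,
      _value: r._value_cpu / float(v: r._value_cores)
  }))

In the Grafana panel you would paste this into the Flux (InfluxDB) datasource and swap the hard-coded range for the dashboard's time range.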
Full Article

Fighting Corporates - Telus Sucks, I Win

I recently had to switch service providers and Telus was the only one offering a fast upload link as well as good download speeds through their fibre optic network. They installed an Actiontec T3200M modem to facilitate the bridge between the fibre optic endpoint and my network. Since this is fibre, they used an SFP based ONT module that plugged into the SFP slot in the T3200M. This is pretty advanced stuff for residential networking, so I was quite excited to see how well it would perform.

I did some speed tests and the results were good - consistently above 300Mbps down and 300Mbps up, with fast ping times (sub-10ms).

Now, my network is fronted by a firewall, and that firewall controls everything. Specifically, I turned off the WiFi in the T3200M and use my own WiFi equipment, all routing through the firewall connected to the T3200M. I placed the T3200M in bridge mode to disable its routing and NAT, as those would just interfere with my network. Doing this is allowed by Telus, but not technically supported. What I understood from that is that if you run into any technical issues doing this, their tech support will not come out and help you resolve issues on the other side of the T3200M. I completely understand this policy, as one cannot expect Telus to support custom configurations for residential customers.

So when I received my first full month's bill I was horrified to see an overage charge of $45. They claimed I consumed 1.5TB of data in my July billing cycle.

Full Article