
The Missing Workhorse Mac

Processing-intensive workflows are in higher demand than ever, but the Mac lineup is missing a crucial piece to help Apple's pro customers.

AVM Fritz WiFi Mesh

AVM Fritz WiFi Mesh arrived unexpectedly, but it is a blessing for the German market

Turning iOS Extensibility to 11

iOS is amazing but lacks productivity basics

iOS' Share Sheet Is Broken For E-Mail

The share sheet we use daily in iOS was first introduced almost ten years ago. Yet enabling users to share basic things like formatted plain text, the one thing E-Mail is most used for, via their preferred E-Mail client is seemingly still an afterthought. It appears that either nobody is able to implement these APIs correctly or nobody is interested in doing so. Values have to be set via undocumented KVC calls, or by prepending whitespace characters to your text. Obviously these workarounds can be found in Apple's great developer documentation, in which they clearly communicate these shortcomings… ah no wait, I am just joking. You can actually find them in comments under answers to StackOverflow questions.

How is anybody willing to tolerate this shit?

At Guardian we try to make it as easy as possible for our customers to send a support inquiry from our app via their E-Mail account so that they have full control over what information is shared with our support staff. Opt-in instead of opt-out! No hidden data mining!

Screenshots: Guardian's Contact Technical Support UI and the resulting formatted E-Mail

Every once in a while we get requests from users asking us to support E-Mail clients other than the built-in iOS Mail.app. Some prefer GMail, Fastmail or Outlook, and we would like to make it easy for all customers to help us help them. We would like to conform to a system protocol (like UIActivityItemSource, but useful) which allows us to set the recipient's E-Mail address, the subject line and the body. Setting the E-Mail's body is the only thing that actually works reliably. Instead we end up with a mess: pre-iOS 13, the iOS Mail.app supported an undocumented KVC call on the UIActivityViewController instance you created to set the subject line. Since it is undocumented, GMail does not support it, but if you add lots of return characters \n to your E-Mail body string, GMail will at least not place your body string into the subject line. I am convinced that other apps have other problems as well.

I do not understand why the three required (yeah, technically two, but who doesn't set a subject) fields of an E-Mail cannot be set programmatically by an app. With iOS 13 Apple added activityViewController:subjectForActivityType:, which allows you to set a subject line, but in my testing neither GMail nor Fastmail actually supports it. Maybe them not supporting it isn't even intentional; maybe they can't, because Apple also didn't bother to document how E-Mail clients are supposed to handle this new value. Who knows…

So instead of using system functionality that is always presented to the user in the same way and would reduce a lot of friction on both ends, we now have the choice of supporting no other E-Mail client, like we currently do, or adding a bunch of URL schemes to deep link into various other apps like it's 2010. What a time to be an iOS developer.

25.03.2021


Fix Apple's GeoTrust APNS Cert Problem

Apple's APNs (Apple Push Notification service) servers started to act up last weekend, and there was a lot of confusion about it at first. This is a rare occurrence, since APNs and iMessage appear to be Apple's only rock solid server side services, while everything else appears to be regularly operated with a staff count of minus one. By "started to act up" I specifically mean that the certificate the service has been using could no longer be verified by many servers after a ca-certificates package update went out that removed the root CA (little bit of context here). Lots of servers have probably started showing an error message similar to this:

Feb 10 15:53:55 guardian-example-server service.elf[31376]: sendPushNotification(): APNs request failed: Post "https://api.push.apple.com/3/device/<token>": x509: certificate signed by unknown authority

This happened to us at Guardian as well, and I only caught it by accident. It led to none of our Pro subscribers getting real time push notifications, a feature I had poured a lot of work into last summer and had to reimplement a couple of times to get right. All of that work was instantly disabled when the certificate was kicked out of the trust store on all of our VPN nodes. I did not want to re-install the certificate system wide again, since GeoTrust no longer appears to be trustworthy and it would have required me to run commands through an SSH session on way too many servers.

So I resorted to being lazy and disabled TLS certificate verification for that one HTTP request. Every other outbound network connection would still fail if it tried to connect to a host that also served a TLS certificate signed by the same GeoTrust certificate, which Apple will continue to use until March 29th 2021. This was my lazy initial solution to the problem, and it never made it into production because @chronic instantly kicked me in the butt about it, and rightfully so. This is bad practice and should not be used in a production environment in 2021!
He suggested dropping the certificate in the typical .pem format onto the filesystem of all hosts and adding it to a temporary trust store for that one request instead (the code below basically only needs an ioutil.ReadFile() if you want to do that). But that would mean that come March 30th we'd have a file lingering on all hosts that I did not want to be there. So: manual upload now, use it for a couple of weeks, then manual removal. Too much potential for human failure if you ask me.
I ended up with a modified version of what Will had suggested: instead of reading a file off the filesystem, I decided to embed the GeoTrust certificate into our binary, since it really wasn't a lot of data.

This, in my opinion, is the right way to solve this problem until March 29th 2021 in Go, and I am sure all server side languages offer a similar API. It allows you to establish a verifiable TLS connection now, while not jeopardising the integrity of your entire system's trust store, and it enables easy removal once Apple starts using the new certificate.

In order to solve this problem the right way I first downloaded the GeoTrust certificate from their website, to which I was still able to establish a trusted connection since macOS still trusts the certificate.

Link to the GeoTrust certificate
I am not including the entire certificate here for a good reason. Verify it for yourself, you shouldn’t trust me!

var (
	// Note: the PEM content must start at the beginning of each line inside
	// the raw string literal. If it is indented, PEM parsing fails and
	// AppendCertsFromPEM() below silently adds nothing to the pool.
	geoTrustRootCA = []byte(`-----BEGIN CERTIFICATE-----
MIIDVDCCAjygAwIBAgIDAjRWMA0GCSqGSIb3DQEBBQUAMEIxCzAJBgNVBAYTAlVT
....
5fEWCRE11azbJHFwLJhWC9kXtNHjUStedejV0NxPNO3CBWaAocvmMw==
-----END CERTIFICATE-----`)
)

Add the contents of the GeoTrust certificate as a variable any way you prefer; it just needs to be accessible to x509.AppendCertsFromPEM() as a byte slice.
In this case I created a global variable by converting a raw string literal to a byte slice and assigning it to geoTrustRootCA.
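
If you are on Go 1.16 or newer, the embed package can achieve the same thing without a giant string literal in your source. A minimal sketch, assuming the certificate lives in a geotrust_root.pem file next to the source file (the file name is my invention, pick whatever you like):

import _ "embed"

// The certificate file is compiled into the binary at build time, so just
// like with the string literal above, nothing lingers on the host filesystem.
//
//go:embed geotrust_root.pem
var geoTrustRootCA []byte

The embedded bytes are used exactly like the variable above, so the rest of the code stays the same.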

	certpool := x509.NewCertPool()
	if ok := certpool.AppendCertsFromPEM(geoTrustRootCA); !ok {
		// Bail out early: an unparseable certificate would otherwise leave
		// us with an empty pool and the same verification failure as before.
		log.Fatal("unable to parse GeoTrust root CA certificate")
	}

	// Clone the default transport so we keep its sane defaults (including
	// HTTP/2 support, which APNs requires) and only swap the trust store.
	transport := http.DefaultTransport.(*http.Transport).Clone()
	transport.TLSClientConfig = &tls.Config{
		RootCAs: certpool,
	}

	httpClient := &http.Client{
		Timeout:   15 * time.Second,
		Transport: transport,
	}

	resp, reqErr := httpClient.Do(...)

Here I create a new x509 certificate pool, add only the GeoTrust certificate, and then use it in the HTTP client's transport object by first creating a copy of the default HTTP transport setup. This HTTP client will allow you to safely make a successful API call to the APNs endpoint for now. It may or may not break the second Apple rolls out the new certificate on March 29th, so be ready and set up an alarm or something…
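
For completeness, here is a rough sketch of what the elided httpClient.Do(...) call could look like in use. This is not our production code: the device token, topic and payload are placeholders, and the provider authentication (JWT token or client certificate) that APNs requires is omitted entirely.

func sendPushNotification(client *http.Client, deviceToken string) error {
	body := strings.NewReader(`{"aps":{"alert":"Hello"}}`)
	req, err := http.NewRequest(http.MethodPost, "https://api.push.apple.com/3/device/"+deviceToken, body)
	if err != nil {
		return err
	}
	// apns-topic is usually your app's bundle identifier; this value is
	// a placeholder. Authentication headers are omitted for brevity.
	req.Header.Set("apns-topic", "com.example.app")

	resp, err := client.Do(req)
	if err != nil {
		// This is where the x509 "unknown authority" error surfaced before.
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("APNs responded with status %d", resp.StatusCode)
	}
	return nil
}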

13.02.2021


Recovering APFS Data

One of my usual holiday duties is to fill the role of family tech support. This year I was assigned the difficult case of recovering data off a seemingly broken-beyond-recovery SSD. The MacBook Pro in question was close to 10 years old, and I had swapped a Samsung 850 Evo SSD into it a few years prior. The owner mostly lives on his iPhone and does not rely on the MacBook for intensive daily work tasks. The MacBook appears to have had some kind of internal hardware fault which led to the corruption of the filesystem. After removing the SSD from the MacBook I plugged it into my Mac via a SATA-to-USB adapter in order to run through a basic data recovery strategy, but I quickly noticed that it was behaving in all kinds of unexpected ways. My Mac instantly recognised the drive itself, but was never able to activate or mount any partition or filesystem.

It was there and appeared to be within reach, but greyed out. I tried running First Aid on the drive and partition via Disk Utility, but it kept spewing errors that didn't give me any hope. After briefly googling I found that not much is available and that most existing tools, like Disk Warrior, are still in the process of fully adopting APFS.

After a bit more searching I found this great blog post describing a similar problem. It mentioned a commercial but incredibly shady app as well as libfsapfs, an open source but experimental solution on Github. I first downloaded the commercial app, scanned the drive and saw that it was able to read the data on the drive and reconstructed entire directories in the filesystem. This gave me hope right away. I stopped the scan, downloaded the Github project and started compiling it. The difference between the two approaches: the commercial data recovery app basically tries to read every block on the drive and reconstructs whatever makes sense to it, while libfsapfs tries to work around the formatting problems to actually mount the partition as usual, so that you could open it via Finder and pull the data off the drive like you normally would. After going through the fairly complicated compilation steps to set up libfsapfs, it ended up not being able to read the drive, which meant that this thing was properly scrambled.

% sudo fsapfsinfo /dev/disk3     
fsapfsinfo 20201107

Unable to open: /dev/disk3.
libfsapfs_container_superblock_read_data: invalid object type: 0x00000000.
libfsapfs_container_superblock_read_file_io_handle: unable to read container superblock data.
libfsapfs_internal_container_open_read: unable to read container superblock at offset: 0 (0x00000000).
libfsapfs_container_open_file_io_handle: unable to read from file IO handle.
info_handle_open_input: unable to open input container.

A few years ago I had to run through the same process after HFS+ on my then brand new 5K iMac decided to mess itself up past the point of no return. All data recovery apps appear incredibly shady, and back then I had settled on buying Disk Drill, which allowed me to recover all my data and did not do anything else that I didn't approve of.
Out of the few Mac data recovery companies that I have looked at, the makers of Disk Drill appear to be one of the least shady. Maybe I am entirely wrong about this, who knows. The app works really well and starts out with a quick scan before it really does go through the entire drive in order to reconstruct everything.

Disk Drill Quick Scan

Disk Drill was also able to reconstruct the entire macOS filesystem structure, and I was able to walk through the user folders with the owner to recover all the valuable data. It was mostly a few photos and documents like CVs, etc., which totaled around 10-20GB. He kept insisting that it was fine if he lost the data, but I had seen family pictures during the first scan which may not be valuable to him now but may become very valuable to him in the future. My goal was to recover absolutely everything.

I do believe that this method is not possible if Full Disk Encryption is enabled, so I would recommend against enabling it if your valuable data only exists on that very Mac. Companies like Backblaze will happily back up your data and store it encrypted for you.
Another option would be to keep another copy on an external drive or on a local NAS that automatically backs up your Mac, like a Synology. Synology is probably the correct solution for most people, as it gives you commercial support if you need it. I personally opted to build my own thing and run FreeNAS, which has recently been renamed to TrueNAS Core, and I really enjoy it.

I hope that this little summary of what I attempted in order to solve this problem is going to help somebody out there, even if it's just an endorsement to buy Disk Drill to recover your data.

31.12.2020


Boring Tech: NOCO Boost HD GB70

NOCO GB70 Image copied from no.co

I am just going to make a bold claim here and say that 2020 was not good for the collective health of car batteries around the world. Unless you have a really fancy car or too much money, the average car battery weighs around 20 kg and is some kind of VRLA, or Valve Regulated Lead Acid, battery. It's heavy and it's ancient tech, but it's really reliable and cheap to manufacture. Because it is ancient tech and discharges itself quite rapidly, compared to Lithium-Ion based chemistries for example, not driving your car leads to the battery discharging to a point where it may damage itself while trying to provide power output. The other very obvious problem is that your car won't start. In order to get out of a potentially sticky situation I bought a battery jump pack in early February, and it has come in handy various times already.

There are various brands available that all seem to build quality products, but I opted to buy the NOCO Boost HD GB70. It claims to jump start an 8 litre petrol engine or a 6 litre diesel engine 40 times. If you know a little bit about internal combustion engines, the 6 litre diesel figure is the more impressive one. It runs at 12V internal voltage and claims to deliver up to 2000A. Based on the cables attached and the various times I have used it so far, I am inclined to believe those numbers. So far I have jump started a 1.4L petrol engine various times, as well as a 3L V6 diesel once, without any problems. The cars cranked over and started immediately after seeing the full 12V, though NOCO gives the disclaimer that some cars may wait 30 seconds before doing anything.

The main reason why I am recommending this to anybody, though, is the seemingly just-smart-enough-to-be-useful internals. The chips connected to the jump leads run through various little checks once attached and make sure that there is no short and that the leads are connected properly to + and - respectively. This means that you can hand this little jump pack to anybody, and they would not be able to use it to damage the car's electronics or vital components like the starter motor. There is a way to disable all the protective features, but you really have to want to turn them off before you can get yourself into trouble.

It has a regular ol' 5V ~2.1A USB-A port which can be used to charge your phone or anything else that connects via USB, as well as a 12V out connector. You may not be as familiar with the latter, but it is a plug commonly used in the automotive industry as well as in workshops for various things. I have a soldering iron, for example, that runs off a 12V plug like this.
The only downside of this product, and why I want to explicitly mention it here, is the Micro-USB port, which is used to charge the jump pack if you don't go for a 12V fast charger. I never liked Micro-USB, but this model appears to have been on sale as-is for a couple of years, and even though I'd much prefer a USB-C port I can see why it's on there.
The jump pack also features a very bright flashlight at the front which can run in various modes, from continuous light to a few flashing modes like SOS or a police-siren-like strobe.

I really like that the clamps are made from what feels like high quality plastic and have jaws just sharp enough on the inside to grab on reliably without damaging anything.
I chose this jump pack in particular because it has the leads permanently attached, but it may be overkill for your needs, and there are smaller and cheaper versions available at reasonable prices. Maybe this one is perfect for you, or another version may suit your needs better; I'd recommend having a look around on the NOCO website. I am usually not the "prepper" type of person, but I believe that owning something like this is very cheap insurance, especially now that the colder season has clearly arrived in the northern hemisphere.

13.12.2020


Boring Tech: HP Color Laserjet Pro M479

HP Color LaserJet Pro MFP M479dw Image copied from hp.com

I do not print a lot. I do have to keep track of invoices, print them and file them, but that is not my main job. My day job is to push bits and bytes around, organise releases and keep the Guardian Firewall infrastructure online. So when I do end up having to print, scan or file something, I want it to happen as quickly as possible. Any kind of downtime is just going to further distract me, and we all know where we eventually end up when that happens: Twitter.

I used to use an HP Color Laserjet Pro MFP M176n, and while it was a solid printer overall and served us well until the day we unplugged it, it had one major downside: speed. Scanning was slow, printing was slow, and interacting with it in any other way was slow. It was retired and given to my sister and her boyfriend, who very rarely have to print or scan anything, and it being a solid laser printer, I am convinced that it will serve them well since it won't clog and is unlikely to fail in any other way.

Laser printers require heat to operate, which I believe is also true for ink based printing systems, so there is usually a significant delay from standby to the first printed page. The HP Color Laserjet Pro M479, though, will print from colder-than-your-ex standby to the first page, which is 95% of my print jobs, within two to three seconds. By which I am not referring to it merely starting to pull in the first page to kick off the printing process; I am referring to the first page coming out of the top exit with the toner in all the right places.
It is a magical feeling when upgrading from a printer that would easily take 10 to 15 seconds to do the same thing. This also applies to scanning, which I usually do more of. The old printer would have to boot up everything, for lack of a better word. If I wanted to scan something, it would also start the print preheat cycle, which would block the scanning task for more than 5 seconds. That isn't a long time, but it is just long enough to be annoying.
Scanning happens entirely separately from anything related to printing, and from me launching a little Mac app to scan to it connecting to the printer via AirPrint takes about a second or two.
The last really big upside, in my opinion, that this class of printer brings is the document feeder on top of the flatbed scanning unit. You can drop a stack of pages in there, hit scan, and it will pull in and scan each page one after another. This has got to be my favorite feature of this printer.

In my case the printer is connected over Ethernet, but it comes with all kinds of fancy and not-so-fancy tech like built-in WiFi, Bluetooth Low Energy, Apple iBeacon and various cloud printing things from Google, HP and others. It also has a little web UI in which various settings can be adjusted. I used it once to turn all of the previously mentioned "features" off, and it worked great for that. The printer has been running since early August, and if I recall correctly I had to reboot it once to fix a problem. It has otherwise mostly sat in standby and has reliably woken up instantly to do the task it was asked to do.

I regularly joke that it has been the single best purchase of the year. This printer does its job reliably and otherwise gets out of my way, but this kind of convenience comes with a real drawback, and that is cost. I paid 348,00€ for the printer, which I believe to have been a pricing mistake, since other online retailers offered it for roughly 450€ in August. HP's own online store still sells the printer at 457,18€, discounted from 466,92€. The other cost downside is running cost. Replacing all four colours at once with OEM HP refills costs almost 400€, though they do last a very long time. Manufacturers usually break this down into a cost per printed page, so keep an eye out for that. In my case it turned out that even though I would be spending more at once, the cost per page was lower compared to the only other option I was interested in buying. To me the convenience and overall pleasure of using this printer is well worth the tradeoff.

If I were asked to recommend a printer/scanner 2-in-1 today, I would recommend this very printer to anyone with no hesitation, and if it broke I would order a new one the second I knew it could not be fixed. It is that good.

12.12.2020


A Few Notes About macOS CI

Update 2020-12-03 17:26

After searching a little, and with help from Matt, I was able to find the resource that stated that Github Actions is using MacStadium. The current version was wiped of that information, but thanks to the Internet Archive I was able to find the actual information written out and disclosed by them.
If you haven't yet, I would recommend donating to the Internet Archive; I just sent them $25.85.


Following the Amazon AWS announcement that they will be joining the circle of few to offer Macs in their datacentres, the topics of Mac hosting, macOS CI and which provider to pick have been widely discussed once again. The overlap with the release of M1 based Macs, and the remaining, only half-to-poorly answered question of how to virtualize another OS on these Macs, led to various very interesting blog posts and public conversations on Twitter. Peter Steinberger sent me a DM asking me to fact check his article, and I figured I'd do it in public for others to read as well. I think he nailed pretty much everything, but I wanted to add a few things, starting by addressing his conclusion.

There’s no one-size fits-all solution when it comes to running macOS in the cloud. Both virtualization technology and bare metal are valid choices depending on organizational structure and requirements, but we hope this has given you a good overview of what’s possible.

He is absolutely correct here and missing critical information at the same time. There is no one-size-fits-all solution to this, but there will always be one reason to choose virtualization: ephemeral builds.
These two magic words make anybody dealing with CI/CD infrastructure very excited for a very simple reason: predictability. It's something our industry has seemingly been chasing for decades now, and thanks to modern container technologies this goal is within reach, or has been partially reached, on the Linux side of things by the big CI-as-a-service providers. Ephemeral builds mean nothing more than always starting with the same environment. They provide a clean, predictable environment for your tests to run in or your software to be packaged up and deployed from. No lingering artifacts, crashed simulators or other leftovers exist that could disturb your fragile CI/CD pipeline. On the macOS side of things it's our Linux on the Desktop. Next year will be the year, I am certain this time.

Virtualizing macOS

The way to achieve ephemeral builds "fairly" easily is by virtualizing the OS. A new VM is created every time a build starts, based on a copy of an environment that was previously set up to your exact needs. I know that this setup is possible with VMware vSphere as well as with KVM (MacStadium Orka). I helped maintain CircleCI's VMware setup and built my own crazy little setup with KVM prior to the release of Orka, though I hadn't used it for long. I think this would be possible with Veertu Anka as well, but I have not tried it (yet).

A lot of MacStadium customers were interested in having this exact setup but had no idea how to get there or how to maintain it, and they ended up using virtualization without an ephemeral build system. They basically gave up before they even got there, or a few months into it. The answer as to why mostly comes down to missing tooling, especially from Apple. You are playing Mac admin on extra-hard mode, and if that isn't your day-to-day job it may be very hard to find the motivation to keep things running.
I had only started working on it when I left MacStadium and have been very distracted since, but I am still absolutely convinced that ephemeral builds are possible on a bare metal system with APFS snapshots. Maybe I am wrong about it, but in theory this should be possible; see the sketch below. If you know more, I'd love to hear from you!
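
To make the idea a little more concrete, here is a minimal Go sketch of the snapshot half, assuming you drive macOS's tmutil from a build agent running on the host. The rollback between builds is exactly the part I haven't proven out, so treat this as a starting point, not a solution.

package main

import (
	"fmt"
	"os/exec"
)

// snapshotBeforeBuild asks macOS for a local APFS snapshot of the boot
// volume via tmutil. The output contains the snapshot date, which you
// would need to record in order to clean up or roll back later.
func snapshotBeforeBuild() (string, error) {
	out, err := exec.Command("tmutil", "localsnapshot").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("tmutil localsnapshot failed: %w", err)
	}
	return string(out), nil
}

func main() {
	out, err := snapshotBeforeBuild()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(out)
	// Run the build here, then roll the volume back to the snapshot.
	// That rollback is the unsolved part: I know of no supported way to
	// revert a booted volume to a snapshot without going through
	// recovery, which is why this remains a theory rather than a how-to.
}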

Bugs

Virtualizing macOS will always present you with weird bugs, since you are interacting with macOS in a way it absolutely hates. The OS fundamentally wants to be operated by a human with a keyboard and a mouse attached, not by various automation tricks and scripts. Look at the state of macOS Automation and tell me that I am wrong. The guy who ran that entire thing for decades was fired (if I recall correctly) and left to join Omni and do cool things there. It perfectly shows that Apple's leadership has no idea what to do with their best talent, even when they already live in the Bay Area and have been working for them for years, knowing all the ins and outs of how to navigate Apple internally.

Nested Virtualization

Just don’t. You will be so much happier if you just don’t do that.

Price

From Peter’s post:

I haven’t found a single writeup that takes price into consideration when discussing macOS virtualization. This is in some ways understandable, as most articles are from large companies, and engineers aren’t included in their price decisions. However, for smaller teams without venture capital, it’s an important metric.

Total cost is difficult to measure, since the promise of virtualization is less ongoing work, which should translate to reduced ongoing maintenance costs, and often employees are the most expensive cost factor.

Price mostly comes down to what your time is worth to you and/or whether you can find somebody capable of and willing to maintain this infrastructure for you. Can you find somebody new if that person leaves your company? What does it take to retain that talent, and is there room for personal growth in this area?
To give you a real world example of my own: after giving a talk about exactly this topic at Otto, who are a MacStadium customer, I was approached within 15 minutes by engineers from 4 different companies asking if I could maintain their infrastructure for them as a contractor. There is a very good reason why this market is as underserved as it is.

Comparing Anka and Orka

The biggest (potential) upside of Anka to me is the ability to turn all your VMs off and run a big build on bare metal macOS. This would require a CI runner setup like Gitlab has had for a long time, or Buildkite, plus a sane CI tagging system. Maybe this would even be a way for some companies to run Android builds on the same host and get around nested virtualization, because, after all, I can't stress enough that you should absolutely not try nested virtualization.

Fully Managed Services

TravisCI runs on machines at MacStadium and so does Github Actions. This was previously disclosed publicly by Microsoft when they ran their weird CI system but I would bet a lot of money that all of that has since been rolled into Github Actions after the acquisition.

There is chatter about the changed macOS EULA and how it relates to these services, and my uneducated guess is this: they aren't going to go anywhere. Companies will continue renting entire machines and offering build minutes, and Apple will keep looking the other way. Maybe Apple will ask for some changes, but the fundamental services will not go away.

Pricing

All of these are expensive for various reasons, the biggest being the manual labour involved in getting hosts online. AWS is not going to bend over backwards to get you anything and is not solving any of the hard problems of maintaining Mac build infrastructure.

What about the Mac Pro?

While I adore the Mac Pro and the solution Apple came up with to sorta-kinda resurrect the Xserve, I think the underlying product is too expensive. Unless you need the extra RAM or really know what you're doing with a virtualized setup, I would not recommend it. My idea of leveraging Anka comes into play again, but I have not tried this yet or compared any numbers.
The current Mac Pro is overly expensive and over-engineered in ways that make it a very bad fit for datacentre use. I am assuming that the 19" rack mount version only exists for the music and video industry, and that Apple thought little to none about it being used as a CI machine or living in a datacentre. In the words of a friend of mine at Apple about the Pro Display XDR: "This is not for you".

This announcement from AWS left me a little confused. MacStadium has great compliance certifications, so I doubt that most customers were waiting for AWS to join this space for those reasons. The prices are also outrageous, and individual support is probably a lot worse compared to MacStadium. Unless you have to go with AWS for whatever reason, I see no point in signing up, because, as usual, their offering isn't better than anything else from other vendors, yet they charge a 10x premium on it. I would recommend that you run macOS bare metal on Mac minis hosted by MacStadium if you can, and use Buildkite to automatically kick off any builds. At that point you still haven't solved any of the actually hard problems of dealing with macOS and CI, but I am sure that Peter will keep posting interesting things.

03.12.2020
