Processing-intensive workflows are in higher demand than ever, but the Mac lineup is missing a crucial piece to help Apple’s pro customers.
AVM Fritz WiFi Mesh came unexpectedly but is a blessing for the German market
iOS is amazing but is lacking productivity basics
I think I should start out with a few disclaimers:
Great, so now that we have this out of the way I can tell you that Packet currently has a cool promo going for their partnership with AMD. You can apply for a $250 credit and test the c2.medium featuring 24 cores (48 threads) of AMD’s latest EPYC CPUs. I saw the announcement a while back in Packet’s newsletter, and after not being able to think of something to do with $250 of free compute I decided to try to run macOS on these machines.
Thanks to KVM and QEMU being incredible pieces of software, this endeavour was not hard at all. Having never needed a VNC connection to a headless server before, I wasted most of my time fighting the VNC connection on the server in order to kick off the installation process.
It’s reasonable to say that you could do this at home, pretty much for free, in a couple of hours start to finish, so I tried to outline it as best as I could for you.
Things you’ll need:
If you want to take advantage of the $250 promo you’ll need to fill out this form for Packet & AMD. The peeps at Packet are all super nice & hard working, so there is a chance that you’ll be granted the credit quickly. Once granted, log in to your account & create a new server in one of the available locations. As of writing those are AMS1, SJC1, EWR1 & NRT1.
The important bit when selecting the setup is which Linux distribution to use. I highly recommend Ubuntu 18.04 LTS since it already ships all the right versions, so you won’t have to compile QEMU from source or anything like that. I’m not good enough with KVM or QEMU to explain exactly why, but having tried older LTS versions of Ubuntu and other distributions, I can tell you that it’s an absolute pain in the ass.
Waiting for your machine to finish being deployed may seem like an eternity compared to deploying a VPS, but working for a company that offers dedicated Macs which allow for basically no automation at all, I can tell you that waiting 5-8 minutes for dedicated hardware is blazing fast. Have a quick walk, show your dog a little love or go and drink some water. It’s good for you!
Once your machine is up do the usual dance of grabbing your dependencies and updating whatever needs to be updated. You know the deal.
After that’s done run
$ apt install qemu uml-utilities libguestfs-tools git
to install QEMU itself, a few dependencies and git, since we need to clone a GitHub repo. The repo is maintained by Dhiru Kholia, but there is no support in any way. Not even an issue tracker. It features a bunch of useful scripts, a ready-to-go Clover image and lots of very useful information. I recommend starting with the README for High Sierra. It contains plenty of information, and this post is mostly a rehash of said README.
You can grab the repo with
$ git clone https://github.com/kholia/OSX-KVM.git
cd into the repo and create a new virtual HDD for macOS
$ cd OSX-KVM
$ qemu-img create -f qcow2 mac_hdd.img 120G
Don’t be afraid to change any of the names, you simply have to keep track of them and change the boot script at the end.
The image file was created in qcow2, the recommended file format for KVM (though KVM can handle a lot of different formats), and is 120 GB in size, which should hold plenty of things like Xcode, your source code and any dependencies necessary to build your app.
As I said before, the repo comes with a pre-built Clover bootloader image, also in the qcow2 file format. Its resolution is 1024x768, which is absolutely fine for a CI setup. I recommend using the pre-built Clover image and moving on for now.
Once you’ve finished all of the tasks on your AMD EPYC machine, you will have to build your macOS .iso on your Mac.
First download the macOS installer .app from the Mac App Store that you’d like to run, in my case High Sierra. The .iso can easily be created by running the create_install_iso.sh script in the repo.
The output of that shell script should be an .iso file of around 5 GB, which needs to be copied to your AMD EPYC machine with Packet, for example (the colon at the end is required)
$ rsync -P /path/to/install_macOS_High_Sierra_10.13.5.iso email@example.com:OSX-KVM/
After the transfer finishes, you could boot straight into macOS without any issues, but we first need to slightly modify the boot-macOS-HS.sh script to add the slowest, but easiest to set up, networking option & to remove a few audio devices that give you nothing but trouble, even on a real Mac.
I also set the virtual CPU to -smp 8,cores=4 & the virtual RAM to -m 8192, which equals 8 GB. The virtual CPU configuration is still a mystery to me, but this setup gives you plenty of speed and should allow you to run 5-8 VMs on this AMD EPYC machine (probably more if you’d like to overprovision your machine a little).
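For reference, a tiny sketch of how those two flags fit together. The variable names are purely illustrative; in the repo the flags simply sit inline in the boot script.

```shell
# Sketch: assemble the CPU and RAM flags used above.
# QEMU's -m flag takes MiB, so 8 GB becomes 8 * 1024 = 8192.
RAM_GB=8
RAM_MB=$(( RAM_GB * 1024 ))
CPU_FLAGS="-smp 8,cores=4"
MEM_FLAGS="-m ${RAM_MB}"
echo "${CPU_FLAGS} ${MEM_FLAGS}"
```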
Please find the line with file=./'HighSierra.iso' at the end and replace the HighSierra.iso string with the name of the .iso file that you generated and copied over.
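If you’d rather not open an editor, a hedged sed one-liner can do the same replacement. The stand-in line is only created so the snippet runs on its own; in practice you’d point it at the real boot-macOS-HS.sh from the repo.

```shell
ISO_NAME="install_macOS_High_Sierra_10.13.5.iso"  # your generated .iso
SCRIPT="boot-macOS-HS.sh"
# Create a stand-in line if the real script isn't present (demo only).
[ -f "$SCRIPT" ] || printf "file=./'HighSierra.iso'\n" > "$SCRIPT"
# Swap the default iso name for yours; sed -i.bak keeps a backup copy
# and works with both GNU and BSD sed.
sed -i.bak "s/'HighSierra.iso'/'${ISO_NAME}'/" "$SCRIPT"
grep "file=" "$SCRIPT"
```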
Once that’s done you should be able to boot the VM.
I was not able to connect to the VM via VNC directly, so I forwarded port 5900, which I specified in the boot-macOS-HS.sh script with the trailing :0, via SSH
$ ssh -L 5900:localhost:5900 firstname.lastname@example.org
After that I only disabled password authentication in Screens and was able to connect to the VM right away.
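In case you end up running more than one VM on the machine: VNC display numbers map to TCP ports as 5900 + N, so the :0 in the boot script means port 5900, a second VM started with :1 would need 5901 forwarded, and so on. A quick sketch (user@server is a placeholder):

```shell
# Map a VNC display number to its TCP port (5900 + display number).
DISPLAY_NO=1                      # e.g. a second VM started with ":1"
VNC_PORT=$(( 5900 + DISPLAY_NO ))
echo "ssh -L ${VNC_PORT}:localhost:${VNC_PORT} user@server"
```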
Below are a few screenshots of the boot & installation process.
I’m not a good photographer. I wouldn’t ever call myself that, but I’m starting to get the hang of it, and writing things down or spelling them out sometimes gives one some kind of epiphany.
Within a couple of weeks I first rented a Canon 5D Mark IV for WWDC 2018 and then a Canon 1DX Mark II. Both cameras deliver incredible images and I really enjoyed shooting & interacting with both cameras. My brain works very well with Canon cameras and they allowed me to learn more about photography. I had to actively think about what I was doing.
The thing that made me come to this realization, though, weren’t the cameras themselves but rather the prime lenses that I brought along and forced myself to use. When getting a bit more serious about photography, prime lenses let you change fewer things at once and focus on the basics. You can take the same shot over and over again with different settings (aperture, ISO, shutter speed, etc.), or you can take different shots with the same settings. If an image does not look good, it’s your fault. Your skill directly translates into your image’s quality. Changing only one side of the equation at a time really gave me a good understanding of what the camera is actually doing. I was able to get a good understanding of depth of field, and of what the aperture stops (f/ numbers) & ISO values on the camera’s display actually mean for me and my photos.
While shooting with the Canon 1DX Mark II & a Canon 50mm f/1.2 prime lens it finally clicked for me. I was trying to get my girlfriend and the puppy to appear sharp in a single shot. I had been shooting other things at f/1.2 before and hadn’t given it much thought. While struggling to get the camera to focus I noticed that the lens was far too wide open to allow for both of them to be in focus. Having the lens that far open gives you incredible portraits with beautiful-looking bokeh in the background, but it also gives you a very shallow depth of field. Only things at a certain distance from the lens will appear sharp and in focus in the image. Too close or too far away and it’ll be blurry. At some point I noticed that I was picking a fight with physics, not with the autofocus mechanism.
I really enjoy bokeh. It’s great and I think a lot of people love it. Bokeh is an incredible photographic effect which lets the viewer concentrate only on the subject in focus. The brain kind of automatically ignores the blurry background surrounding whatever is in focus. Focusing only on getting as much bokeh as possible won’t give you the best images though. I had to learn that the hard way, and I now have to keep reminding myself of it.
As a little exercise to better myself, I picked an elderflower, held it in front of the lens and shot a few photos with different apertures. This way I was able to better understand the effect that I was playing with. I will do this again in the future with different subjects & settings. Hopefully this way I will be able to train my eye to look for the angles that work with bokeh vs. the ones that don’t.
I think similar exercises will also work for other effects and will ultimately allow me to take good photos most of the time, or to not even attempt a certain shot because I know that it’ll be impossible.
No one is a born master. I’m certain I will never fully master photography, but I will keep shooting and trying new things in order to better myself.
After a really long time without any problems whatsoever, I of course received the message from my mother that her WiFi wasn’t working right in the middle of my two-week WWDC trip to the Bay Area. This absolutely always happens when you’re genuinely stressed out on the other side of the world, and never when you actually have some spare time in between.
So after arriving back in Hamburg last weekend, I sat down and looked into what was going on. After some pondering, it turned out that the DHCP server built into the Fritz!Box 7490 was no longer usable. I couldn’t determine a reason for this, since the few logs on the device were not insightful in any way. After lengthy rounds of switching various functions and devices on the network off and on again, I decided to start over, and I can only recommend this to anyone who finds themselves in the same or a similar position. It takes a really long time, but it is still the faster and more reliable way to rescue the network.
After resetting my parents’ Fritz!Box 7490 to factory settings, I started rebuilding the mesh network. To my surprise, however, the Fritz!Repeater 1750E refused to connect to the new-old network. After a long fight it turned out that the repeaters also have to be reset to factory settings before they are able to join a mesh network again.
In my opinion, AVM’s mesh system still has noticeably rough edges, but since it is being worked on very diligently, I think these problem areas will become less and less noticeable over the next few months. For the moment, though, we are forced to help ourselves a little. The repeaters can be reset to factory settings with a long press of the WPS button on the front. When integrating a mesh-capable device into the network, however, you should make sure to choose the technology you actually want for extending the network right from the start. You can either set up a WiFi bridge or a LAN bridge. Since my parents’ house has CAT 7 cabling everywhere, and a LAN bridge doesn’t clog up the WiFi frequency bands with inter-mesh network communication, I once again chose the LAN bridge. After all of this was done, the network was as stable as before.
I think AVM really needs to realize that the mesh software, especially on the repeaters, has to get much better in order to compete with companies like Eero or Ubiquiti, which currently offer better hardware and software.
Repeaters must be able to join other networks immediately, without first being reset to factory settings, and under no circumstances should a single daemon bring down an entire network. Furthermore, I remain convinced that physically pressing multiple buttons on different devices is absolutely user-unfriendly. All of these processes should be started from the web interface (as is already the case with DECT) and confirmed on the physical device.
I think AVM has a lot of work ahead of it, and I hope they keep investing in mesh, because otherwise the technology works very well.
In 2013 Apple introduced push notification services (APNS) for Safari, but so far the feature is only enabled on macOS, even though the underlying code powering Safari on macOS and MobileSafari on iOS is WebKit in both cases. I noticed this right away after watching the WWDC session video and was a little confused by it, since the entire APNS infrastructure was specifically built for the release of iOS 3 in 2009.
As with every nice thing on the web, the initial idea may seem novel at first but will always end up being abused non-stop by “growth hackers” around the world. This is probably the reason why, five years on, iOS is still missing this feature. The WebKit team was probably already fearing this horror and didn’t want people to ruin the new platform. Maybe this order even came from a time when Steve Jobs was still alive; we can only speculate.
If used tastefully, though, and with the explicit consent of the user, one could imagine push notifications for websites being quite useful on iOS. Oftentimes I find myself having to download an application only to gain the one feature I care about: the immediate flow of information.
Having downloaded these bloated piles of bad code, I then have to waste cognitive capacity on hiding the app in a folder titled “Trash” or “Sachen” in order to not clutter up my neatly arranged homescreen. To make things even worse, these kinds of apps are more often than not just native wrappers around a website which scroll in funky ways, tend to always take up at least 100 MB for no damn reason, seem to break native behaviors we all expect and are generally slower than native apps in every measurable way.
Looking at you Discourse, Amazon and various banking apps on my phone.
My bank thinks that this an acceptable iOS application (the website is just as terrible). I was very vocal about their tech in my last meeting with my families advisor about it. He was not happy about getting so much shit from me which is understandable, but it really is bad. pic.twitter.com/s53TSNaAZI— Constantin Jacob (@tzeejay) 13. Juni 2018
Thanks Amazon web app in an iOS app 🙄🖕🏻 pic.twitter.com/LtQCQSwCzf— Constantin Jacob (@tzeejay) 13. Juni 2018
What if there were a way to get push notifications on iOS using technology that already exists in the WebKit source, in order to keep up with things on the Swift Forums or with an online store you never shopped at before or will never shop with again any time soon? As with anything on the web, the sky is the limit.
I generally believe that this could be a great feature for a lot of users, and even for organizations who simply don’t have the time or resources to develop a good native mobile experience. Time and money are very good reasons for any organization not to commit to another platform, but I’d still wish for a way to have third parties pass along information that I care about. Information is power, and enabling third parties to easily empower their users with information they care about is the right thing to do here, in my opinion.
A lot of emphasis on “information they care about” is required here though, since the current push notification system for the web on macOS is being abused to the point where the WebKit team had to add a checkbox to Safari to never allow any website to send push notifications at all. This is not great in my opinion; it is a last resort that lets users defend themselves against the modal pop-ups that appear immediately on almost every major website in order to increase “engagement”.
“Never trust a statistic you haven’t forged yourself” applies once again, I guess.
A more relaxed permissions model on the web compared to iOS or macOS is the reason why people find these very useful features so gross on the web, yet allow any native app access to almost everything without even thinking about the implications. People like Will Strafach are trying to combat these bad actors on the App Store. Native apps take all kinds of your data and send it to all kinds of places all the time. Will is bringing the heat to these data mining agencies and their enablers, the app developers, by spreading awareness amongst their users, and will in the future outright block data from being delivered with the help of an app he is working on. He is only getting started and I can’t wait to see if he’ll be able to put some of these awful companies out of business entirely. One can only hope.
What I would like to see the WebKit team implement is a permissions system more akin to the one on iOS for native applications, without straying too far from the standards. Have very clearly defined rules in plain English (no lawyer-lingo) publicly available to anyone on https://developer.apple.com and only enable these features for websites that opt into your rules with things like signed certificates or entitlements like on iOS. Websites that abuse the services could have their certificates put on trial after the third unique report by a user and put onto a watchlist with automated testing to verify the claims. I would like to see the WebKit team be very strict and defend their privacy and usability high ground like they recently did with ITP 2.0 and their incredible writeup on HSTS abuse.
I want all of this because I know that they are right about all of this for the web’s sake, and I know that the WebKit team can take all of it because they’re usually right in the long run. WebKit may adopt certain features more slowly than Chrome or Firefox, but at the end of the day they always have the user’s best interest in mind, not anyone else’s. After all, a lot of people don’t mind giving up a lot of their data in exchange for a better service, but none of it should be allowed to happen without the user’s understanding and resulting explicit consent.
I will support them and I know many others who will too.
Brent Simmons’ post on inessential.com about sharing code between iOS and macOS made a lot of sense to me, and I’d like to double down here.
Modularity goes a long way for a lot of things, and writing code is no exception. A little extra work upfront may allow you to branch out in the future into platforms that didn’t yet exist or that you didn’t anticipate at first. A good credo to follow here is “Don’t repeat yourself”. Writing little helper classes that do common tasks for you will work on pretty much any platform (watchOS & tvOS included) and will make you a better developer. Callback functions are great, cheap, and once you get used to the syntax (fuckingblocksyntax.com is your friend) you’ll have modular code in no time. I would argue that anyone writing iOS apps today would be able to move and tweak their existing business and networking logic into an internal library within a couple of days.
Assuming there’s a data model, maybe a database, some networking code, that kind of thing, then you can use that exact same code in your Mac app, quite likely without any changes whatsoever.
That leaves the 20% or whatever that’s user interface. AppKit is not the same as UIKit, but it’s recognizable. Same patterns and concepts, and often similar names (UITableView/NSTableView).
Simply copying over the class files will get you up and running in literally no time. That leaves you only having to worry about AppKit. As someone who learned UIKit first and hasn’t done much with AppKit, I’d say that it’s harder to pick up for sure, but not as impossible as some people make it out to be. You have to become familiar with managing multiple windows and the tooling around it while trying to stay productive. One thing that Brent completely disregards, though, is that while UI work may be only 20% of the work, it appears to always take 80% of the time, which means that developers would have to spend 80% of the time developing the Mac app dabbling with technologies they’re unfamiliar with. This may be intimidating to some.
Apple’s lack of advancements in regards to tooling isn’t what has once again sparked the Electron vs. native Mac apps debate, though. The bigger issue is rooted in upper management, in my opinion. Jonathan Wight pretty much nails it:
The success of tools like Electron is almost entirely based on the (almost always broken) promise of “write once, run anywhere” and has nothing to do with tooling whatsoever.— Jonathan Wight (@schwa) 25. April 2018
Watch company leadership’s eyes light up when you tell them they’ll only need to hire web developers…
A lack of interest in providing a good experience on all platforms is the real issue that we’re facing.
Ten years ago I thought that all the new iOS developers would translate to lots more Mac developers. That that didn’t happen is a huge surprise to me. Because if you’re an iOS developer you’re practically a Mac developer already.
I think Apple is partly to blame for this. The company is famously not very good at doing multiple things well at once. It seems to always focus on one thing in particular and let other things sit for quite a while until they’re in real trouble, like the Mac currently is.
Look at all the apps that are native on iOS. Wouldn’t it be great to see these companies put a little bit of work into splitting up their business logic from their UI code and “porting”, for lack of a better term, their app to macOS?
How great would a native Netflix or Amazon Prime Video app be on macOS that would let you download shows for offline consumption like on iOS? Or how great would a native Instagram macOS app be that would let “influencers” really boost their business, post engaging content more easily and allowed them to upload professionally edited “social cuts” of promo videos without having to copy the videos over to an iPhone and uploading them to Instagram there?
There are many things that would make macOS a better platform if a few developers spent a little bit of time on making their apps more modular.
According to Wikipedia’s entry for CI, the term was first coined in 1991, three whole years before I was even born, but only in the last couple of years have iOS developers across the board really started to adopt these processes.
The increased popularity amongst iOS developers has created a previously unimagined market and given big CI/CD companies like CircleCI or TravisCI incentives to add support for automated builds in macOS environments. The demand has driven third-party developer tooling innovation heavily and created entire specialized infrastructure companies like MacStadium.
Apple, on the other hand, hasn’t kept up with the growing demand. Software support, but more importantly hardware support, for the increased computing needs has been weak in recent years.
The last real server hardware from Apple was the Xserve, which was discontinued in 2009. No hardware that is powerful in every way (CPU, GPU, networking & storage) and meant to be run in a datacenter for computing-intensive tasks has been released since. Employees internally at Apple, but also third-party developers and other customers with high-intensity computing needs on macOS, are stuck with merely bearable hardware in less-than-ideal configurations.
Neither third party companies like MacStadium nor Apple’s own infrastructure teams are able to offer a good solution with enough computing power especially to developers. The Intel Xeon E5 v2 processors in the currently sold “trashcan” Mac Pro were first released in 2013 and really don’t stand a chance against newer Skylake-E or Kabylake-E Xeon processors. Increased efficiency, faster processing speeds and higher CPU core counts lead to current entry level Xeon processors outrunning top end models from 2013.
I’m not certain how Apple builds its operating systems, but I can imagine that these build jobs take quite a while to finish, unless they use their own specialized hardware for it.
A proper solution from Apple to satisfy the ever-growing processing needs behind trends like CI/CD is long overdue. This isn’t about having something to please the “Enterprise Market” anymore, but rather about becoming aware of a problem that has slowly grown over the years. Many market segments, especially so-called “pro users”, prefer macOS because it’s simply the more capable and efficient OS for their tasks, even on old hardware. Companies like Imgix built custom sleds, similar to the ones MacStadium developed, in order to incorporate Mac Pros into their datacenters, since they do all the actual image processing on macOS. They go through all of this hassle in order to fully leverage the strengths of macOS while offloading other tasks onto Linux-based servers.
Apple doesn’t have to release something that competes with Dell or Supermicro hardware or a Linux-powered cloud; rather, it has to solve the issues its customers face, since those issues can only be solved through its Mac hardware monopoly. Apple has to provide as much processing power to macOS as possible, in the form of supported hardware.
Since I pretend to already know what I’m talking about let me prove it to you and talk some specs:
In order to really solve these issues Apple needs to release something that massively increases computing density in the server racks. The Xserve did that very well and also looked brilliant doing it.
Just look at it. They’re gorgeous!
What is needed hardware wise is a sort of reborn Xserve. In a perfect world that would be a 1U or 2U 19” rack mountable server that comes as a baseline equipped with:
This configuration may sound awfully similar to what Apple has recently released in the form of the iMac Pro, but the issue with the iMac Pro is that it comes sealed, non-user upgradable in a non-standard formfactor. All these traits make it undesirable for datacenter use.
Placing similar, or even the same, components in a maintenance and upgrade friendly standardized “box” would make it very desirable to run in a datacenter. Datacenter staff would be able to service components in the event of a hardware failure or in case hardware upgrades are available in the future.
Add-on options on a rack mountable reborn Xserve could be dual Intel Xeon E5, dual GPUs and more onboard storage. Dual 10GB NICs on the back are probably sufficient for most customers but extra space in the chassis for PCIe 3.0 x8 or x16 extension cards would allow for internal fiber channel cards for SAN connections or if desired even more Ethernet ports. This allows for a certain degree of flexibility and would make it more interesting to more customers and their varying workloads.
But cramming lots of computing power into a standardized format and calling it a day isn’t enough on Apple’s part, in my opinion. Software development on Apple’s and related platforms has matured greatly in the last couple of years, but Apple doesn’t seem to be interested in it, or is unable to keep up.
More and more things are meant to be written into configuration files of some sort, similar to Dockerfiles or YAML files filled with instructions for CI/CD systems. The driving factor for this change is that these configuration files can be tracked in a VCS like Git, while also guaranteeing infrastructure flexibility and reproducibility.
Dev vs. staging vs. production. Sound familiar?
iTunes Connect still has no REST API or other interface to automate tedious work away (Fastlane does some but abuses a lot of private APIs along the way. Also: owned by Google).
Xcode’s developer tooling is lagging behind third-party vendors’ tooling like JetBrains’ AppCode. I mean, c’mon… Apple’s own platform developers preferring to work in a Java-based environment to write Swift or even Objective-C makes me really, really sad, and reading things like this sure doesn’t make it better.
“Nobody inside Apple uses Xcode the way people outside do”— [renaud lienhart]; (@layoutSubviews) 15. März 2018
Yuuup. After leaving Apple, I instantly started missing all the tooling/resources that Apple uses internally. The company doesn’t realise how terrible the outside ecosystem is.https://t.co/VjhkOe0VDU
macOS either won’t let you automate things like OS setup (anymore) that would allow developers to create repeatable, ready-to-go installations of macOS (a poor man’s Docker, if you will) from shell scripts, or command-line tools end up triggering modal alerts in the UI for no apparent reason.
All these things are just the tip of the iceberg that makes developing for Apple’s platforms a royal pain in the ass every day, but people still do it for one very simple reason: one is able to make a living doing it. Plain and simple.
I’m personally not asking for something like adding cgroups to the XNU kernel in order to really support Docker natively on macOS, but I think that Apple could make a huge dent in the lives of its developers (big and small) with a little bit of course correction and dedication to shipping quality products to its most loyal and influential customers again.
Apple lives and dies by its community of strongly opinionated, know-it-all (I include myself here) pro customers who care a lot about the things we use and who care about Apple and its decisions.
As always with Apple, it’s hard to predict what’s going on on the inside. WWDC may bring a massive course correction that no one anticipated, but for now they appear to be out of touch with their developers and with what software development looks like around the world.
I certainly believe that they will do the right thing in the end, it might just take a while to see the light of day. In the meantime we can make a little fun of them hoping that they’re a good sport about it.