Welp no change. I’m guessing the motherboard firmware already contained the latest microcode. Oh well, was worth a try, thank you.
This sounds like my best shot, thank you.
I’ve installed the amd-ucode package. It already adds microcode to the HOOKS array in /etc/mkinitcpio.conf and runs mkinitcpio -P, but I’ve moved microcode before autodetect so it bundles code for all CPUs, not just the current one (to have it ready when I swap), and re-ran mkinitcpio -P. I also had to re-run grub-mkconfig -o /boot/grub/grub.cfg.
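Concretely, the change amounts to reordering one line (the exact contents of the HOOKS array will vary between installs, so this is just a sketch):

```shell
# /etc/mkinitcpio.conf -- "microcode" moved before "autodetect" so the
# early image bundles microcode for all CPUs, not just the detected one:
HOOKS=(base udev microcode autodetect modconf kms keyboard keymap consolefont block filesystems fsck)

# Then regenerate all initramfs images and the GRUB config:
mkinitcpio -P
grub-mkconfig -o /boot/grub/grub.cfg
```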
I’ve seen the message “Early uncompressed CPIO image generation successful” pass by; lsinitcpio --early /boot/initramfs-6.12-x86_64.img | grep micro shows kernel/x86/microcode/AuthenticAMD.bin; there’s a /boot/amd-ucode.img; and there’s an initrd parameter for it in grub.cfg. I’ve also confirmed that /usr/lib/firmware/amd-ucode/README lists an update for that new CPU (and, incidentally, for the current one too).
Now from what I understand all I have to do is reboot and the early stage will apply the update?
Any idea what it looks like when it applies the microcode? Will it appear in dmesg after boot, or does it happen too early in the boot process?
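I figure I can at least grep for it after the reboot, something along these lines:

```shell
# If the early microcode update was applied, the kernel normally
# logs a line mentioning it; look for it in the ring buffer:
dmesg | grep -i microcode
```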
Everything is up to date as far as I can tell, I did Windows too.
memtest ran fine for a couple of hours; the CPU stress test hung partway through, though, while the CPU temp was around 75°C.
Motherboard is a Gigabyte B450 Aorus M. It’s fully updated and support for this particular CPU is explicitly listed in a past revision of the mobo firmware.
The manual doesn’t list any specific CPU settings, but their website says stepping A0, and that’s what the defaults were set to. I’m also seeing “core speed: 400 MHz” and “multiplier: x 4.0 (14-36)”.
even some normal batch cpus might sometimes require a bit more (or less) juice or a system tweak
What does that involve? I wouldn’t know where to begin changing voltages or other parameters. I suspect I shouldn’t just faff about in the BIOS and hope for the best. :/
The Hoffman recipe is 12g of coffee, 250ml of water, 2 minutes steep time; give the brewer a small swirl, steep another 30 seconds, then press down slowly over at least another 30 seconds. You can find the video on YouTube.
There are many other factors involved such as the size of the grind, the uniformity of the grind, the temperature of the water, the steeping time, and the quantities of coffee and water – so really the recipe is just meant as a starting point. You will need to dial it in for each different batch of coffee.
Most of these factors have to do with extraction, aka “yield”. More steeping time, hotter water, more water and coffee, and a finer grind all increase extraction in different ways, and over-extraction usually ends up tasting bitter. The opposites decrease extraction, and under-extraction ends up tasting sour. The Hoffman recipe is a balanced start.
With the Aeropress you have easy access to all these factors and can customize the brew extensively but you have to do some trial and error.
Well that’s the nice part about the Aeropress, the process is so customizable that you can find a good recipe for just about any coffee.
The Hoffman recipe is not meant to be perfect, just a safe starting point. It can’t possibly fit every single coffee batch out there.
Ironically, if Graphene succeeded, it would lead to a system every bit as locked down as a manufacturer’s Android. GrapheneOS also wouldn’t allow you to have root, etc.
IMO Graphene wants a place at the big player table. They’re not in it for user freedoms.
Is there a reason to expose your services to the whole internet? That’s what CF tunnels and Tailscale Funnel do.
I can’t really recommend either of them. Funnel forces you to use a .ts.net subdomain, so you can’t use your own domain; CF allows your own domain but forces you onto their DNS service. Both CF and Tailscale play MITM with your HTTPS connections: they decrypt and re-encrypt traffic on the fly, which means they can read it unencrypted.
If you really must expose your services publicly, get a cheap VPS, point your domain’s A and AAAA records at its public IPs, make a tunnel from your server to the VPS, and forward connections to port 443 on the VPS’s public interface through the tunnel to the reverse HTTP proxy running on your server (with mandatory TLS and Let’s Encrypt certificates for your domain).
This way you get an unbroken TLS connection all the way through, with nobody in the middle.
The tunnel that you use between your server and the VPS can work behind CGNAT because it’s outgoing.
Technically the tunnel itself doesn’t need to be encrypted, because it only carries TLS connections anyway, but then you’d have to handle authorization yourself. It’s probably simplest to use an SSH tunnel.
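As a sketch, assuming OpenSSH on both ends and a hypothetical user/host tunnel@vps.example.com, the reverse tunnel can be a single command run from the home server:

```shell
# Forward the VPS's public port 443 back to the local reverse proxy.
# Requires "GatewayPorts clientspecified" (or "yes") in the VPS's
# sshd_config so the forwarded port binds on 0.0.0.0, and root (or a
# sysctl tweak) on the VPS since 443 is a privileged port.
ssh -N -R 0.0.0.0:443:127.0.0.1:443 tunnel@vps.example.com
```

In practice you’d wrap this in autossh or a systemd unit so it reconnects automatically after network drops.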
I’m waiting for the day Google Recaptcha will ask me “is that traffic light red?” and after a couple of seconds “hurry up, I’m approaching the intersection!”
If by “easy” you mean someone else already spent 5 years and a nice chunk of cash training a model for it, which you get to use. And if you accept that it will not be accurate across all possible species and environments, only very specific subsets.
Honestly, I’ll just send it back at this point. I have kernel panics pointing to at least two of the cores being bad, which would explain the sporadic nature of the errors. It would also explain why memtest ran fine: it only uses the first core by default. Too bad I didn’t think of that while running it, since memtest lets you select cores explicitly.