
tired: wardriving
wired: wardialing
inspired: installing gpt-2 and asking it "What is the phone number to call?" and dialing whatever it tells you

Guess I'll convert it back to amdgpu-pro + OpenCL and never run TensorFlow.

The answer once again seems to be, "buy more hardware to do what you want to do". Oy.

The "converting an old Radeon mining rig to general compute" saga continues: upgraded to Ubuntu 18.04, got the ROCm kernel installed, and it turns out it can only use GPUs in the big slots (unlike openCL, which can use 1x risers). This rig has 6 cards, so 4 sit idle. 😑

gpt-2 makes at least as much sense as the package management scheme of your favorite GPU vendor.

Oh, this was so worth it.

Model prompt >>> Installing the video card drivers proved difficult.
=====
When trying to install a graphics driver from a USB hub, we needed to find the drive's BIOS and reset the card. Luckily, one of these BIOS drivers is named UEFI 4.1.8...
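
That "Model prompt >>>" banner is the interactive sampling script from the openai/gpt-2 repo. If you'd rather drive it from your own Python, a rough equivalent using Hugging Face's transformers (an assumption on my part, a different codebase than what produced the sample above) looks something like:

```python
# Sample a continuation from GPT-2 for the same prompt; the sampling knobs
# (top_k, max_length) are illustrative, not whatever the original run used.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Installing the video card drivers proved difficult."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_length=80,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```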

Maybe I just didn't downgrade that kernel hard enough. BRB, getting a bigger hammer.

I mean, this is exactly what virtualization typically saves us from, but it can't because hardware. And this has happened at least half a dozen times in the last two years.

ML on GPUs is great. Computers are great.

Meanwhile on the Radeon box, amdgpu-pro's DKMS module won't build because of kernel header mismatches with the default 16.04.6 kernel (silly me for upgrading from 16.04.4 all those months ago...). Downgrading the kernel results in additional compiler errors. The sharks are circling. Halp.
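
The sanity check I should have scripted before letting the installer loose (hypothetical, just the test, not a fix):

```python
# Does /usr/src actually have headers matching the running kernel?
# If not, the amdgpu-pro DKMS build has nothing to compile against.
import os
import subprocess

running = subprocess.check_output(["uname", "-r"]).decode().strip()
headers = "/usr/src/linux-headers-" + running
print("running kernel :", running)
print("headers present:", os.path.isdir(headers), "(" + headers + ")")
```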

3 hours later, still installing video drivers. The combination of TensorFlow requiring specific NVIDIA hardware, CUDA 9.0 (NOT later), and Ubuntu 16.04 (NOT 18.04) might just mean you-can't-get-there-from-here.
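
If you want to find out whether you can get there before the three hours are gone, a quick check from the TF 1.x era (a hedged sketch, your wheel may vary):

```python
# Report what this TensorFlow build thinks of the machine before
# committing to another driver reinstall.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
print("Built with CUDA:   ", tf.test.is_built_with_cuda())
print("GPU device found:  ", tf.test.gpu_device_name() or "none")
```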

The lesson I seem to learn over and over again is "you don't own the right video card for what you're trying to do". CUDA compute capability, AMD vs. NVIDIA in general, OpenCL vs. ROCm vs. CUDA, TensorFlow snobbery... bah.
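
For the compute capability part specifically, TF 1.x will at least tell you what you own via its device_lib (internal module, so this may move out from under you):

```python
# List local devices; for GPUs the description string includes
# something like "compute capability: 6.1".
from tensorflow.python.client import device_lib

for dev in device_lib.list_local_devices():
    if dev.device_type == "GPU":
        print(dev.name, "->", dev.physical_device_desc)
```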

Maybe someday trying out new ML demos won't involve removing video drivers, installing a new kernel, creating and obliterating half a dozen virtualenvs, giving up and upgrading Docker, and finally abandoning all hope and just running it in CPU mode... But today is not that day.
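
The "abandon all hope, run it on the CPU" step, for the record, is just hiding the GPUs before TensorFlow gets a look at them (a sketch, assuming the demo is a TF one):

```python
# Hide every CUDA device so TensorFlow falls back to CPU kernels.
# This has to happen before TensorFlow is imported and initializes CUDA.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import tensorflow as tf
print("GPU device:", tf.test.gpu_device_name() or "none, CPU mode it is")
```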

Full disclosure: I'm not affiliated with Dante Labs, and I have not (yet) used their service. They also don't seem to disclose whether they use your genetic data for any purposes of their own. But $299 for a 30x BAM is an amazing price, and worth the uncertainty for me. YMMV.
