Ok, I see now how this works. In case someone is googling for this in the future - in short, the low/med/high choices are how quality settings assets communicate with cameras, lights, etc.
In the end it’s a powerful system - cameras, lights, and volumes can set their quality to an abstract high, medium, or low, and then a separate quality asset specifies what each level means concretely in terms of texture sizes etc. The game can then switch between different quality assets for high-end or low-end computers, consoles, etc.
For example, you want one scene to have high-quality shadows, but in another scene you don’t. So you set the light in the first scene to use shadow map resolution “high”, and in the second scene set the light’s shadow map resolution to “low”. Then in the quality settings asset you can set “high, medium, low” to be 4k, 2k, 1k textures respectively. But then you want to add support for low-end machines - so you add another quality settings asset that maps “high, medium, low” to 1k, 512, 256, or whatever, and let the player switch between them in the options menu.
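If you want to drive this from script, here’s a minimal sketch of both halves: pointing a light at the abstract “high” level, and switching the project-wide quality level (each level in Project Settings > Quality can reference a different HDRP asset). `QualitySettings.SetQualityLevel` is the standard Unity API; the HDRP light member names (`shadowResolution.useOverride` / `.level`) and the 0/1/2 level numbering are from my memory of the HDRP scripting API, so treat those as assumptions and check them against your HDRP version:

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

public class ShadowQualityExample : MonoBehaviour
{
    public Light sceneLight; // assign in the Inspector

    void Start()
    {
        var hdLight = sceneLight.GetComponent<HDAdditionalLightData>();

        // Use an abstract quality level instead of a fixed pixel size.
        // Assumed: levels run 0 = Low, 1 = Medium, 2 = High.
        hdLight.shadowResolution.useOverride = false;
        hdLight.shadowResolution.level = 2; // "High"

        // Options-menu side: switch to the last quality level defined in
        // Project Settings > Quality. Whichever HDRP asset that level points
        // at decides what "High" means in actual pixels.
        QualitySettings.SetQualityLevel(QualitySettings.names.Length - 1,
                                        applyExpensiveChanges: true);
    }
}
```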
You can also do the same with cameras, volumes, etc. To set a different LOD per camera, you set the defaults in the HDRP Global Settings one way (e.g. set LOD bias quality level “high”), then on a specific camera check the “Custom Frame Settings” box and override it (e.g. set LOD bias to “low”). And as above, the quality settings asset defines what that means - a high-end asset could map high/medium/low bias to 3/2/1, while a low-end asset could map them to 1/1/0.5, and so on.
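The per-camera override can also be done from script via `HDAdditionalCameraData`, roughly like the frame-settings snippets in HDRP’s docs. A sketch, assuming the `FrameSettings` / `FrameSettingsField` member names (`lodBiasMode`, `lodBiasQualityLevel`, etc.) I remember are right for your HDRP version - verify before relying on it:

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

public class CameraLodOverride : MonoBehaviour
{
    void Start()
    {
        var hdCamera = GetComponent<HDAdditionalCameraData>();

        // Equivalent of checking "Custom Frame Settings" on the camera.
        hdCamera.customRenderingSettings = true;

        // Mark the LOD bias fields as overridden, so HDRP reads our values
        // instead of the defaults from HDRP Global Settings.
        var mask = hdCamera.renderingPathCustomFrameSettingsOverrideMask;
        mask.mask[(uint)FrameSettingsField.LODBiasMode] = true;
        mask.mask[(uint)FrameSettingsField.LODBiasQualityLevel] = true;
        hdCamera.renderingPathCustomFrameSettingsOverrideMask = mask;

        // Point this camera at the abstract "Low" level; the active quality
        // asset decides whether that means a bias of 1, 0.5, etc.
        var fs = hdCamera.renderingPathCustomFrameSettings;
        fs.lodBiasMode = LODBiasMode.FromQualitySettings;
        fs.lodBiasQualityLevel = 0; // assumed: 0 == Low
        hdCamera.renderingPathCustomFrameSettings = fs;
    }
}
```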
What makes this extra confusing is that 1. high/medium/low are very generic - and also arbitrary - names, which makes googling hard, 2. how quality assets hook up to cameras/lights/volumes/global settings is completely non-obvious, and 3. none of it is documented in the HDRP docs.
Basically the only place I’ve seen this actually explained by Unity is the Unite Now 2020 video, which is highly recommended: