You still need to do that, but the Linux bridge needs VLANs defined on it as well, because the physical switch port trunking the traffic will be tagging the respective VLANs to/from the Proxmox server and its virtual guests.
So, vmbr1 maps to physical interface enp2s0f0. On vmbr1, I have two VLAN interfaces defined: vmbr1.100 (Proxmox guest VLAN) and vmbr1.60 (physical infrastructure VLAN).
My Proxmox server has its own address in vlan60, and my Proxmox guests have addresses (and vlan tag) for vlan100.
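For illustration, here's roughly what that looks like in /etc/network/interfaces. This is a sketch, not my literal config: the addresses are placeholders, and it assumes a VLAN-aware bridge (which is what the vmbr1.60 / vmbr1.100 naming implies):

```
auto enp2s0f0
iface enp2s0f0 inet manual

auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp2s0f0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 60 100

# Proxmox server's own address in vlan60
auto vmbr1.60
iface vmbr1.60 inet static
    address 192.168.60.10/24
    gateway 192.168.60.1

# host address in vlan100 (the optional bit I mention at the end)
auto vmbr1.100
iface vmbr1.100 inet static
    address 192.168.100.10/24
```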
The added headfuck (especially at setup) is that I also run an OPNsense VM on Proxmox, and it has its own vlan interfaces defined - essentially virtual interfaces on top of a virtual interface. So, I have:
switch trunk port
enp2s0f0 (physical)
vmbr1 (Linux bridge)
vmbr1.60 (Proxmox server interface)
vmbr1.100 (Proxmox VLAN interface)
virtual guest nic (w/ vlan tag and IP address - see the qm example after this list)
vtnet1 (OPNsense “physical” nic, but actually virtual)
vtnet1_vlan[xxx] (OPNsense virtual nic per vlan)
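Tagging the guest NIC is just the tag= option on the VM's network device. A one-liner example (the VM ID is made up):

```
qm set 101 --net0 virtio,bridge=vmbr1,tag=100
```

With that, the guest itself stays untagged; the bridge adds/strips the VLAN 100 tag on its port.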
All virtual guests' default route is OPNsense's IP address in vlan100, which maps to the OPNsense virtual interface vtnet1_vlan100.
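So inside a guest, the network config is completely ordinary - no VLAN awareness needed, since the bridge handles the tagging. A sketch with made-up addresses (192.168.100.1 standing in for OPNsense's vlan100 address):

```
ip addr add 192.168.100.50/24 dev eth0
ip route add default via 192.168.100.1   # OPNsense, on vtnet1_vlan100
```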
Like I said, it’s a headfuck when you first set it up. Interface-ception.
The only unnecessary bit in my setup is that my Proxmox server also has an IP address in vlan100 (via vmbr1.100). I put it there when I originally thought I'd use Proxmox firewalling as well, to effectively create a zero-trust network for my Proxmox cluster. For me, though, that would've been overkill.