These aren't exactly newbie questions, which is why I suspect you haven't gotten many responses. I'm not an expert on the Linux memory subsystem, but I can try to take a crack at this.
1. On 32 or 64 bit systems? I thought HIGHMEM went away in x86-64 land, but I'll admit I'm not at all sure about that. Assuming you are talking about 32 bit systems: the virtual address space of a process is 4 GB, which IIRC was mapped with the top 1 GB as kernel space and the bottom 3 GB as user space (I think there was an option for a 2 GB/2 GB split as well). In terms of physical memory, up to 4 GB could be used, unless PAE was enabled. With PAE, each process was still limited to 4 GB, but the kernel itself could map up to 64 GB of physical memory, IIRC.

In x86-64 land, the virtual address sizes are huge (64 bits, though virtual addresses tend to be only 48 bits, at least in the chips I have), but the physical address lines are a bit smaller. For example, on one of my servers with dual Intel X5650 processors (ca 2010-2011 era chips), the physical address bus is 40 bits, as reported by /proc/cpuinfo. This is enough to address up to 1 TB of memory. Newer machines have wider buses; e.g. another machine I have with dual E5-2630 CPUs still has the 48 bit virtual address size, but a 46 bit physical address bus, sufficient to address up to 64 TB of RAM. I think that the biggest quad- or eight-socket server motherboards only support 4 or 8 TB of RAM, so this is probably good for now :-).
2. You could probably use most of it, forcing a lot of swapping to disk in the process. But you could not use all of it; some would have to be reserved for the OS.
3. I think this would be somewhat architecture-dependent, based on how the MMU lays out page table entries. At a basic level, how many 4 KB pages are there within 256 MB? That part is a fairly simple math problem...