Algorithms such as LZW neither know nor care what the data to be compressed "means to you." (Computers can't read.)
Text is generally much more compressible than binary data, of course, simply because text has much more redundancy at the bit level. A typical file of English text, for instance, uses only a small fraction of the 256 possible values of "a byte": roughly 26 lowercase letters, 26 uppercase letters, and 20 or so digits and punctuation marks, call it 70-odd values in all. Since log2 of 72 is only a little over 6, each 8-bit byte carries barely 6 of its 8 bits of information even under worst-case assumptions, and the real figure is lower still once you account for how unevenly English uses those characters. With binary data, the odds are much greater that most of the 256 possible values are in use. That is to say, within any given block of, say, 4096 bytes (4096*8 bits), there aren't many usefully repeated groups of bits.
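To make that difference measurable, here is a minimal sketch (in Python, my own illustration; the original names no language or tool) that counts how many distinct byte values a file actually uses and estimates its zero-order entropy in bits per byte. English text typically lands well under 8 bits per byte; already-compressed or genuinely binary data sits close to 8.

    import math
    import sys
    from collections import Counter

    def byte_stats(path: str) -> tuple[int, float]:
        """Return (distinct byte values used, estimated bits of information per byte)."""
        with open(path, "rb") as f:
            data = f.read()
        counts = Counter(data)                 # frequency of each byte value 0..255
        total = len(data)
        entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
        return len(counts), entropy

    if __name__ == "__main__":
        distinct, bits = byte_stats(sys.argv[1])
        print(f"{distinct} distinct byte values, ~{bits:.2f} bits of information per byte")

Run it against a plain-text file and then against, say, a ZIP or JPEG and the gap shows up immediately.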
File compression of this kind basically works by identifying repeated groups of bits within a window (or growing dictionary) of previously seen data and replacing each repeat with a single, shorter bitwise code...
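As an illustration of that dictionary idea, here is a bare-bones LZW-style encoder sketch in Python (my own illustration, not code from the original; a real LZW implementation also packs these integer codes into a variable-width bitstream and bounds the table size, both omitted here):

    def lzw_compress(data: bytes) -> list[int]:
        """Greedy LZW: emit the code for the longest match already in the table,
        then add that match plus the next byte as a new table entry."""
        table = {bytes([i]): i for i in range(256)}  # one code per possible byte value
        next_code = 256
        out = []
        current = b""
        for b in data:
            candidate = current + bytes([b])
            if candidate in table:
                current = candidate            # keep growing the match
            else:
                out.append(table[current])     # emit code for the longest match seen
                table[candidate] = next_code   # remember the new, longer string
                next_code += 1
                current = bytes([b])
        if current:
            out.append(table[current])
        return out

For example, `lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")` emits noticeably fewer codes than there are input bytes, because the repeated fragments ("TO", "BE", "OR", ...) get replaced by single dictionary codes after their first appearance.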