Well, fonts can be a problem if the one you're using lacks a needed character, but that's rare unless the text is in an uncommon language. An encoding problem is much more likely, particularly if there are just a few bad oddball characters scattered around the text.
gedit seems to offer no way to change the text encoding after a file is loaded, but the open dialog has a field at the bottom for selecting the encoding as you load it. You can also use the `--encoding` option on the command line.
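For example, if you suspect the file is Windows-1252 (just an assumption here; substitute whatever encoding fits, and note the file name is a placeholder):

```shell
# Force a specific encoding at load time. The encoding name must be one
# gedit recognizes -- the same list shown in its open dialog.
gedit --encoding=WINDOWS-1252 broken.txt
```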
Most editors try to autodetect the encoding when loading a file, but they aren't perfect at it. Windows has traditionally used `Windows-1252` (a.k.a. `cp1252`), a Microsoft "variant" of `ISO-8859-1`, for English text; other variants cover the various European languages. Keep trying encodings until you find one that works. Modern Linux distributions default to `UTF-8`, by the way.
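To see the difference concretely: the single byte 0xE9 is "é" in Windows-1252, but on its own it isn't valid UTF-8, which is exactly the kind of mismatch that produces those oddball characters. A quick sketch with `iconv`:

```shell
# Byte 0xE9 (octal 351) decodes as "e-acute" under Windows-1252...
printf '\351' | iconv -f WINDOWS-1252 -t UTF-8   # prints é

# ...but the same lone byte is rejected when treated as UTF-8:
printf '\351' | iconv -f UTF-8 -t UTF-8          # fails: invalid byte sequence
```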
You can use `iconv` to batch-convert files from one encoding to another, once you know what they are. You can try running `chardet` on a file to see what comes up, but it's not always that accurate either (it likely uses the same library calls as the editors). If it reports less than 100% confidence, don't trust it.
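A minimal batch-conversion sketch, assuming the files turned out to be Windows-1252 (the glob pattern and encoding names here are placeholders for your situation):

```shell
# Convert every .txt file from Windows-1252 to UTF-8, in place
# (via a temporary file, since iconv can't overwrite its own input).
for f in *.txt; do
    iconv -f WINDOWS-1252 -t UTF-8 "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
```

`file --mime-encoding somefile.txt` is another quick way to get a guess at the encoding, with the same caveat about accuracy.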
Finally, on a related note, don't forget that there's a difference between DOS- and Unix-style line endings (CRLF vs. LF). Some programs auto-convert these as well, but not all. There are many options for converting line endings manually, so I'll leave that as a research exercise for the OP.