awk is generally the most appropriate tool to use when working with column-delimited text.
But grep can be used here too. You just need to give it a regular expression that targets the appropriate lines.
Code:
grep -Ev '^(xs|yx)\>' infile
The expression breaks down as "^", the beginning of the line; "(xs|yx)", either of the strings "xs" or "yx"; and "\>", a positional anchor matching the end of a word.
As you can see, this particular example is quite easy, since you only need to target the first two characters on the line. For columns in the middle of the line, the regex has to be more complex.
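For example, to filter on the second column instead (assuming whitespace-separated fields; "infile" is just the placeholder filename from above), you'd first have to skip over the first field:
Code:
grep -Ev '^[^[:blank:]]+[[:blank:]]+(xs|yx)\>' infile
This is exactly why awk is usually the more comfortable choice once the target column moves away from the start of the line.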
If you don't already know about regular expressions, I highly recommend taking the time to learn. It's perhaps the single biggest "bang for the buck" topic you can learn in coding. All the major text editing tools support them.
Here are a few regular expressions tutorials:
http://mywiki.wooledge.org/RegularExpression
http://www.grymoire.com/Unix/Regular.html
http://www.regular-expressions.info/
Speaking of regex, Colucix's last example has a slight flaw.
Code:
awk '$1 !~ /[xy][sx]/' file
"
[xy][sx]" will match
all combinations of those characters, so "xx" and "ys" would also be eliminated from the output. Also, it relies on the assumption that that the field only has two characters, as it would also match any longer entry with those characters in them, such as "ab
xscd".
So it would be better to use a similar expression to the one I used in grep.
Code:
awk '$1 !~ /^(xs|yx)$/' file
Since we're only testing field one, we can use the more natural "$" end anchor (which here marks the end of the field), instead of the "\>" word anchor.