Old 02-14-2021, 05:18 AM   #1
Michael Uplawski
Senior Member
 
Registered: Dec 2015
Posts: 1,623
Blog Entries: 40

Rep: Reputation: Disabled
[regex] when would you prefer capture groups or String tokenizers?


Good afternoon.

I revisit Jeffrey Friedl's great book on Regular Expressions. Each time I wonder if I should learn Perl, just for the fun of it. And each time, I realize that I have done enough programming without Perl and fear the steep learning curve.

But looking back at my past solutions in code, whenever strings had to be matched, split into pieces, or analyzed in any way, I have to admit that I avoided Regular Expressions if I could do the same thing with a simple tokenizer. When you can define a delimiter, many string functions and methods let you split up and compare strings piece by piece, and you do not need to know much about Regular Expressions, even though these functions and methods often accept a Regular Expression as a parameter.

Would you formulate a rule, or just relate an experience, that speaks for giving precedence to one over the other?

I shall provide code examples...
Code:
hulk@hogan:~$ irb
irb(main):009:0> "hey, there is a 50€-note lying on the table".scan(/.*,/)
=> ["hey,"]
irb(main):010:0> "hey, there is a 50€-note lying on the table".scan(/\d+/)
=> ["50"]
irb(main):031:0> str = "a223abb233b".match(/(\d+)a/)[1]
=> "223"
irb(main):022:0> /a+(\d+)a.*(\1)b/.match "aaa233a234a233b"
=> #<MatchData "aaa233a234a233b" 1:"233" 2:"233">
Edit: Terribly dumb and not really correct in C:
Code:
#include <string.h>
#include <stdio.h>

int main(void) {
  const char* str = "Hey, there is nothing lying on the table";
  /* strstr() returns a pointer to the first occurrence of the substring,
     or NULL when it is not found */
  const char* where = strstr(str, "nothing");
  if (where != NULL)
    printf("%s\n", where);
  return 0;
}
Do not take this as an example for anything. I leave it here for authenticity.

Last edited by Michael Uplawski; 02-14-2021 at 03:19 PM. Reason: now with actual capture group... not back-reference.
 
Old 02-14-2021, 06:52 AM   #2
boughtonp
Senior Member
 
Registered: Feb 2007
Location: UK
Distribution: Debian
Posts: 3,627

Rep: Reputation: 2556
Quote:
Originally Posted by title
[regex] when would you prefer back-references or String tokenizers?
Uh? Your title asks about back-references, but your question text makes no further reference, or even allusion, to back-references.

Since it's not clear what you're actually asking, I've no idea if any of the following is what you're after or not...


Quote:
Originally Posted by Michael Uplawski View Post
I revisit Jeffrey Friedl's great book on Regular Expressions. Each time I wonder if I should learn Perl, just for the fun of it.
Don't conflate "Perl" and "regex".

Sure, Perl has a powerful and flexible regex engine, but there is far more to the language than that, and I'm sure it's possible to use/learn it without regex (if for some reason one wanted to).

It is certainly possible to learn regex without touching Perl.


Quote:
But looking back at my past solutions in code, whenever strings had to be matched, split into pieces, or analyzed in any way, I have to admit that I avoided Regular Expressions if I could do the same thing with a simple tokenizer.
If there's a simple direct solution, use it.

I would never choose to write ".*," to match the first word of a sentence - the first choice would be a function that delimited the string with commas, e.g. "ListFirst(string)" or "string.split(',',2)[0]" (or similar), and the second would be patterns like "[^,]*," or "\w+(?=,)" depending on the specific need.
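
For instance, staying with Ruby from your examples, a rough sketch of those options (nothing definitive, just your own sentence run through them):
Code:
str = "hey, there is a 50€-note lying on the table"

# first choice: delimit on the comma and take the first piece
str.split(",", 2)[0]    # => "hey"

# pattern alternatives, depending on the specific need
str[/[^,]*/]            # => "hey"  ([^,]* stops at the first comma)
str[/\w+(?=,)/]         # => "hey"  (the lookahead requires, but does not consume, the comma)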


Quote:
you do not need to know much about Regular Expressions, even though these functions and methods often accept a Regular Expression as a parameter.
There is not much you can do with regex that can't also be done another way. That doesn't mean regex isn't an incredibly useful tool when applied appropriately.


Quote:
Would you formulate a rule, or just relate an experience, that speaks for giving precedence to one over the other?
Same way you choose what to eat for dinner - a combination of experience, preference, and available options.

When you know regex, it can be a quick and concise way to describe text you want to do something with.

Even when I intend to use a proper parser for some data, it can be a great 90% solution for the first draft that lets me focus on the bulk of the code, and come back to the precise format later.

And it's generally a lot quicker to use tools like awk/grep/sed than doing ad hoc string tokenization.


Last edited by boughtonp; 02-14-2021 at 06:56 AM.
 
Old 02-14-2021, 11:06 AM   #3
Michael Uplawski
Senior Member
 
Registered: Dec 2015
Posts: 1,623

Original Poster
Blog Entries: 40

Rep: Reputation: Disabled
Quote:
Originally Posted by boughtonp View Post
Uh? Your title asks about back-references, but your question text makes no further reference, or even allusion, to back-references.

Since it's not clear what you're actually asking, I've no idea if any of the following is what you're after or not...
I confused two expressions, “back-references” and “capture groups”. Although related, my question was more about the latter, as you have noted, it seems. Before “the book”, I did not know these existed, although I had seen the syntax, earlier.
Btw. There is a back-reference in my last Ruby-example.

Quote:
I would never choose to write ".*," to match the first word of a sentence - the first choice would be a function that delimited the string with commas, e.g. "ListFirst(string)" or "string.split(',',2)[0]" (or similar), and the second would be patterns like "[^,]*," or "\w+(?=,)" depending on the specific need.
Yes, that is why I would rather use String.split(<delimiter>) than a regex. Now I wonder in how many cases the following actions could be facilitated if a pertinent regex permits the direct access to any of the created tokens. It certainly depends on the needs and the objective of the exercise. On the downside, the more a regex is capable of doing, the less maintainable it may be. I do not know this for sure, for lack of experience.
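
To make that concrete for myself (a minimal Ruby sketch with an invented, semicolon-delimited record):
Code:
line = "50;EUR;note on the table"

# tokenizer style: positional access after a split
line.split(";")[0]                     # => "50"

# regex style: a named capture says which piece I am after
line[/\A(?<amount>\d+);/, "amount"]    # => "50"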

Quote:
There is not much you can do with regex that can't also be done another way.
My uncertainty originates from this fact, I guess.

Quote:
When you know regex, it can be a quick and concise way to describe text you want to do something with.
When working in a team, does it not increase the need for communication? I like to comment my code, but describing a regex so that it is fully understood by everybody is certainly less fun.

Quote:
Even when I intend to use a proper parser for some data, it can be a great 90% solution for the first draft that lets me focus on the bulk of the code, and come back to the precise format later.
Voilà. This is a thing to consider and to keep in mind. “Prototyping” with regex should even help to highlight possible pitfalls and things to keep an eye on when you later devise a simpler tokenizer..!?
Quote:
And it's generally a lot quicker to use tools like awk/grep/sed than doing ad hoc string tokenization.
With an emphasis on “ad hoc”.

See? This is not all futile.
I just cannot do all the coding and do not have enough ideas to cover all these aspects, let alone the experience.

TY.

Michael

Last edited by Michael Uplawski; 02-14-2021 at 11:11 AM. Reason: Kraut2English
 
Old 02-14-2021, 11:55 AM   #4
shruggy
Senior Member
 
Registered: Mar 2020
Posts: 3,678

Rep: Reputation: Disabled
Jeffrey Friedl once blogged about Jamie Zawinski's (in)famous quote:
Quote:
Some people, when confronted with a problem, think “I know, I'll use regular expressions.” Now they have two problems.
 
Old 02-14-2021, 01:36 PM   #5
boughtonp
Senior Member
 
Registered: Feb 2007
Location: UK
Distribution: Debian
Posts: 3,627

Rep: Reputation: 2556
Quote:
Originally Posted by Michael Uplawski View Post
I confused two expressions, “back-references” and “capture groups”. Although related, my question was more about the latter, as you have noted, it seems. Before “the book”, I did not know these existed, although I had seen the syntax, earlier.
Btw. There is a back-reference in my last Ruby-example.
Ah, as is evident, I didn't really pay a great deal of attention to the examples. (I was going to comment on each one, then ...didn't, for some reason.)

I think I understand the angle you're coming from now, and hopefully the rest of this post answers it better.


Quote:
Now I wonder in how many cases the following actions could be facilitated if a pertinent regex permits the direct access to any of the created tokens.
I'm not sure I catch what you mean by "following actions", but this is what capture groups do - they capture the text, and can be used via in-pattern back-references, replacement-string back-references, or output as part of the match data (in languages with such functionality).

Modern regex implementations (e.g. Perl/Python/Java) allow named capture groups as well as the traditional positional ones, and getting an array of matches, each with named key/value pairs for the groups, can be a nice way to tokenize. There is no maintenance problem when they are formatted sensibly...
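
A rough Ruby sketch of that idea (Ruby has named groups too; the log line is just something made up for illustration):
Code:
line = "2021-02-14 11:06 WARN disk almost full"
m = line.match(/(?<date>\d{4}-\d{2}-\d{2}) (?<time>\d{2}:\d{2}) (?<level>\w+) (?<message>.*)/)

m[:level]          # => "WARN"
m[:message]        # => "disk almost full"
m.named_captures   # => {"date"=>"2021-02-14", "time"=>"11:06", "level"=>"WARN", "message"=>"disk almost full"}
                   #    (named_captures needs Ruby >= 2.4)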


Quote:
When working in a team, does it not increase the need for communication?
The less experienced a team is (in any technology), the greater the need for communication, and there's nothing wrong with that - how else would anyone learn?

Quote:
I like to comment my code, but describing a regex so that it is fully understood by everybody is certainly less fun.
It sounds like you're saying you like to waste time by repeating yourself?

Comments should be used to explain code that cannot be readily understood by a proficient developer reading through the code (and should generally focus on why not what).
Inexperienced developers should not be relying on comments; when they encounter code they don't understand, that's their opportunity to learn.


At the same time, it's important to remember that most regex implementations do not require everything to be compacted into the dense single-line strings that regex is sometimes infamous for; they have a comment-mode flag that ignores unescaped whitespace and allows "#" to start comments.

There's a couple of examples here: https://www.linuxquestions.org/questions/programming-9/seeking-interesting-regex-samples-4175671487/#post6101696
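
In Ruby that comment mode is the /x flag; a minimal sketch using the sentence from the first post:
Code:
# /x makes the engine ignore unescaped whitespace and treat "#" as a comment
price = /
  (?<amount>\d+)        # the number
  (?<currency>€|EUR)    # the currency sign or code directly after it
/x

"hey, there is a 50€-note lying on the table".match(price)[:amount]   # => "50"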

 
Old 02-14-2021, 03:16 PM   #6
Michael Uplawski
Senior Member
 
Registered: Dec 2015
Posts: 1,623

Original Poster
Blog Entries: 40

Rep: Reputation: Disabled
Quote:
Originally Posted by boughtonp View Post
Modern regex implementations (e.g. Perl/Python/Java) allow named capture groups as well as the traditional positional ones, and getting an array of matches, each with named key/value pairs for the groups, can be a nice way to tokenize. There is no maintenance problem when they are formatted sensibly...
I have never used more than 2 capture groups, I believe. And my regexes are mostly for single-shot matches, but I see the charm of named capture groups.

The test of email addresses is a nice example. May I ask if the non-capturing groups in this precise case really make a difference? Maybe it depends on the number of mail addresses that are tested.
Also, the use of comment mode renders the code comparable to that of any other kind of tokenizer, meaning that the amount of detail covered by the regex becomes an obvious advantage when the targeted data is relatively unspecific.

TY.
 
Old 02-15-2021, 08:04 AM   #7
boughtonp
Senior Member
 
Registered: Feb 2007
Location: UK
Distribution: Debian
Posts: 3,627

Rep: Reputation: 2556
Quote:
Originally Posted by Michael Uplawski View Post
May I ask if the non-capturing groups in this precise case really make a difference? Maybe it depends on the number of mail addresses that are tested.
You mean in performance terms?

I'm not sure I've ever measured it, but to me it's mostly an indication of intent - I'm grouping this but explicitly don't care about its value.

Thinking about it, I'm getting curious - since a regex engine needs to store backtracking information, it's going to have the start position and length of every unit/atom already, so the difference between an unused capture group and a non-capturing group might effectively only be an extra int/ID being assigned to that particular section, or perhaps even only a boolean, with counting done on retrieval.

If the capture group causes text to be internally duplicated then that could start adding up, given long enough text and/or enough matches, but even then - compared to the bloat of modern software - it'd likely take a lot before it became significant.

You've made me want to go investigate how different regex engines do it and see if there are any meaningful differences between them.

 
Old 02-15-2021, 08:11 AM   #8
shruggy
Senior Member
 
Registered: Mar 2020
Posts: 3,678

Rep: Reputation: Disabled
Quote:
Originally Posted by boughtonp View Post
You've made me want to go investigate how different regex engines do it and see if there are any meaningful differences between them.
An old, but still relevant article by Russ Cox (author of RE2).
 
Old 02-15-2021, 09:05 AM   #9
boughtonp
Senior Member
 
Registered: Feb 2007
Location: UK
Distribution: Debian
Posts: 3,627

Rep: Reputation: 2556
Quote:
Originally Posted by shruggy View Post
An old, but still relevant article by Russ Cox (author of RE2).
Is that the one that boils down to "regex can be faster if you remove useful features"?

In any case there's no reference to capture groups in it, so it doesn't cover what I was referring to.

 
Old 02-15-2021, 10:31 AM   #10
shruggy
Senior Member
 
Registered: Mar 2020
Posts: 3,678

Rep: Reputation: Disabled
This is what he has to say about backreferences though:
Quote:
Backreferences. As mentioned earlier, no one knows how to implement regular expressions with backreferences efficiently, though no one can prove that it's impossible either. (Specifically, the problem is NP-complete, meaning that if someone did find an efficient implementation, that would be major news to computer scientists and would win a million dollar prize.) The simplest, most effective strategy for backreferences, taken by the original awk and egrep, is not to implement them. This strategy is no longer practical: users have come to rely on backreferences for at least occasional use, and backreferences are part of the POSIX standard for regular expressions. Even so, it would be reasonable to use Thompson's NFA simulation for most regular expressions, and only bring out backtracking when it is needed. A particularly clever implementation could combine the two, resorting to backtracking only to accommodate the backreferences.
At least, his own RE2 library implements capturing groups.
 
Old 02-16-2021, 12:04 AM   #11
Michael Uplawski
Senior Member
 
Registered: Dec 2015
Posts: 1,623

Original Poster
Blog Entries: 40

Rep: Reputation: Disabled
On non-capturing groups, Jeffrey Friedl writes that they are either practical in that they avoid global variables being “used up” (my words) for uninteresting data, or that they are, in fact and as boughtonp already says, signalling intent. The gain in efficiency should depend on the amount of data munged, or on the number of times a regex is applied in a loop or similar.

I am not sure about the gain in readability where capturing groups are mixed with non-capturing ones... maybe in comment mode. In my opinion they do, though, clearly signal that a value is not used in the later analysis, and they will still do so when you return much later to adapt your code to new requirements.
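
To illustrate what I mean with the globals (a minimal Ruby sketch; as in Perl, every capturing group fills one of $1, $2, ..., while (?:...) does not):
Code:
"a223abb233b" =~ /(?:a+)(\d+)/   # the a's are grouped but not captured
$1   # => "223"
$2   # => nil

"a223abb233b" =~ /(a+)(\d+)/     # same match, but now two globals are used up
$1   # => "a"
$2   # => "223"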

I will find out how qutebrowser integrates spell-checking; bear with my English for now.

Cheerio.

Last edited by Michael Uplawski; 02-16-2021 at 12:20 AM. Reason: ... bear. Wow. Really ... Darn.
 
Old 02-16-2021, 06:42 AM   #12
Michael Uplawski
Senior Member
 
Registered: Dec 2015
Posts: 1,623

Original Poster
Blog Entries: 40

Rep: Reputation: Disabled
Quote:
Originally Posted by boughtonp View Post
You've made me want to go investigate how different regex engines do it and see if there are any meaningful differences between them.
I am not in a position to contribute wise words to this endeavor, but will just add that some tools switch between DFA and NFA engines as needed, and I guess that this can apply to PCRE, too. It is therefore really important to evaluate *engines* first and *then* to see what is under the hood of any particular tool.

You only look at NFAs, I know. However, if there are results, keep them as talkative as possible. You never know when this thread on LQ comes up in a search result and who will use it in an argument.

Last edited by Michael Uplawski; 02-16-2021 at 06:45 AM.
 
  

