Terminal ultra-fast file viewer for a several-GB plain text file?
Hello,
I am looking for an ultra-fast file viewer for a several-GB plain text file. My relatively large file is about 5.5 GB, just text and nothing else.
I would like to view it, scrolling down and with a GOTO to jump to a given line.
== VIM
I tried VIM first, but it loads the whole document into memory.
== LESS
Less is quite good but slow too. Less is still usable since it does not load the (large) file into memory.
The fastest scroll can be done with the "cat" command. In tests just now on a 4.4MB file:
more (holding down space bar to end) 2m54.745s
less (holding down space bar to end) 2m57.617s
less (hitting ctrl-g to go to end=no scroll) 0m3.208s
cat (scrolls automatically from start to end) 0m1.133s
All of that is way too fast for humans to read. I'm assuming you don't want to just scroll but rather want to find text. For indeterminate text, less is the way to go because it lets you scroll up and down from the text you find so you can see nearby lines, whereas more only allows scrolling down.
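For the GOTO-to-a-line part of the question, less can also jump straight to a line without scrolling. A quick sketch (the line number and filename are only placeholders):
Code:
> less +20421 FILENAME
Inside less you can likewise type a line number followed by g (e.g. 20421g) to jump there, or G on its own to jump to the end. On a multi-GB file the first jump still has to read up to that line, so it may take a moment.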
You're correct: for files that large you should NOT use vim.
If you know the text you're looking for, you could use grep to pull it out of the file. If you need multiple lines, grep has flags that can give you additional lines after the match. We used to have a process that would scan web/Java logs for errors and email any matching line followed by the next 30 lines (which gave more detail on the error). This worked well because the log in question had a limited size and would quickly overwrite itself with continued errors, so if we didn't extract and email them we'd never know what the original error and detail were.
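As a rough sketch of that kind of extraction (the pattern, line count and filenames are placeholders, not the original script):
Code:
> grep -A 30 "ERROR" LOGFILE > errors_with_context.txt
-A 30 prints the 30 lines following each match; -B does the same for lines before a match, and -C gives context on both sides.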
5.5 GB? Oh, I don't think you'll do better than less, even though it is slow. Most other applications will try to load the whole file into memory.
Just 2 things:
- if you know what you are searching for, you can use grep to look for the pattern and extract a certain number of lines around each matching line, and you can redirect the output to another file, which will be smaller and much more manageable.
- you could split the file into several smaller parts. Think about it: you're not going to read a 5.5 GB file in one go, so you don't need to open the whole of it. You can use the sed command to extract specific lines or a range of lines from the file and, again, redirect the output to produce a smaller, more manageable file.
Some examples:
Code:
> grep -n -B 10 -A 5 "PATTERN" FILENAME > OUTPUTFILE
This will output each matching line with its line number, plus the 10 lines before and 5 lines after it.
Code:
> sed -n '20421,25000p' FILENAME > OUTPUTFILE
This will output the range of lines from line 20421 to line 25000 to OUTPUTFILE.
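One refinement worth knowing for a multi-GB file: by default sed keeps reading to the end of the file even after the range has been printed. Telling it to quit at the last line of the range avoids scanning the remaining gigabytes (the line numbers are just the ones from the example above):
Code:
> sed -n '20421,25000p;25000q' FILENAME > OUTPUTFILE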
Please read the man pages for grep and sed for much more information.
Since I'm not taking this off the zero-reply list, may I ask whether there is a text editor which will just open and allow editing of files over, for example, 4 GB? Or, indeed, any hex editors.
I am not sure of the reasons for the original question, but I have to admit that not being able to open files larger than about 15% of my RAM size frustrates me.
To edit a file that size you're probably better off using sed to edit the specific line(s) than trying to pull the entire thing into an editor all at once.
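For example (a sketch only; the line number, pattern and filename are placeholders, and GNU sed's -i rewrites the file in place, so test on a copy first):
Code:
> sed -i '20421s/OLDTEXT/NEWTEXT/' FILENAME
This substitutes only on line 20421 and streams the rest of the file through unchanged, so nothing has to fit in memory.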
Quote:
Originally Posted by MensaWater
To edit a file that size you're probably better off using sed to edit the specific line(s) than trying to pull the entire thing into an editor all at once.
Indeed, but to continue my off-topic (?) tangent: for example, I wanted to try to retrieve some videos for a colleague and it appeared their headers were damaged. I would have loved to be able to open the files and compare them by eye -- i.e. looking not only for the same text but for patterns.
With 32 GB of RAM it seems a bit silly that the biggest file I seem to be able to load into RAM and edit is around 4 GB (or thereabouts).
Again, apologies if this isn't the issue the original post is about, but it seems relevant to me.
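Not an editor, but for the header-comparison case it may be enough to dump and diff just the first part of each file rather than loading them whole (the byte count and filenames are placeholders):
Code:
> xxd -l 512 video1.bad > head1.hex
> xxd -l 512 video2.good > head2.hex
> diff head1.hex head2.hex
xxd -l limits the dump to the first 512 bytes, so the total file size doesn't matter.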
Quote:
- you could split the file into several smaller parts. Think about it: you're not going to read a 5.5 GB file in one go, so you don't need to open the whole of it. You can use the sed command to extract specific lines or a range of lines from the file and, again, redirect the output to produce a smaller, more manageable file.
There's a utility called "split" that's good at splitting up text files. Here are two examples which will split a text file into chunks 10,000 lines long (the first uses the long, human-readable form of the flag):
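For instance (FILENAME and the chunk_ output prefix are placeholders):
Code:
> split --lines=10000 FILENAME chunk_
> split -l 10000 FILENAME chunk_
Both commands write a series of 10,000-line files named chunk_aa, chunk_ab, and so on, each of which can then be opened in an ordinary editor.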
There doesn't seem to be a Linux editor capable of handling gigabyte-sized files with any speed. SlickEdit is a closed-source program that will run on Linux and edits files up to 2 TB, but it costs USD 300.00 for one user.