[SOLVED] libarchive: dump memory archive to disk file
If I now try to unpack the archive in a terminal, I get:
Code:
tar -xf output2.tar.gz
gzip: stdin: unexpected end of file
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
On the other hand, if I write the archive out to disk like this:
I did some code refactoring and it magically started to work correctly.
I don't know what happened, nor whether it is going to stay that way throughout the whole development cycle.
Could somebody please confirm that the code I posted in the first post should actually work and behave as expected (that is, that with this code I should get a correct on-disk archive)?
You didn't show enough code to determine what might be wrong, so I modified the "man archive_write" example code to use "archive_write_open_memory" and it worked fine:
My code is below, so if you have any comments, let me know. It's a C++ class; size and data are class members holding the archive size and the data buffer, respectively.
Code:
/// Maximal archive size.
#define DATA_SIZE (1024U * 1024U)
/// Buffer size for file reading.
#define RBUF_SIZE (16U * 1024U)
/******************************************************************************/
int archive_t::create(filelist_t const &p_flst)
/******************************************************************************/
{
struct archive *a = NULL;
int res = 0;
int m_open = ARCHIVE_FATAL;
archive_entry *e = NULL;
int rfd = -1;
char rbuf[RBUF_SIZE];
if (prgnam.length() == 0U)
{
emsg() << "archive: PRGNAM must be set." << endl;
return -1;
}
if (p_flst.size() == 0U)
{
emsg() << "archive: file list is empty." << endl;
return -1;
}
if (size != 0U || data != NULL)
{
emsg() << "archive: previous archive not freed." << endl;
return -1;
}
a = archive_write_new();
if (a == NULL)
{
emsg() << "archive: not enough memory." << endl;
goto failure;
}
res = archive_write_set_format_ustar(a); /* one set_format call suffices;
                                            a later call overrides earlier
                                            ones */
if (res != ARCHIVE_OK)
{
emsg_archive(a);
goto failure;
}
data = new(std::nothrow) char[DATA_SIZE];
if (data == NULL)
{
emsg() << "archive: not enough memory." << endl;
goto failure;
}
m_open = archive_write_open_memory(a, data, DATA_SIZE, &size);
if (m_open != ARCHIVE_OK)
{
emsg_archive(a);
goto failure;
}
e = archive_entry_new();
if (e == NULL)
{
emsg() << "archive: not enough memory." << endl;
goto failure;
}
for (auto item = p_flst.begin(); item != p_flst.end(); ++item)
{
string apath;
string rpath;
ssize_t rsize;
apath = prgnam + "/" + item->name;
rpath = item->name;
archive_entry_set_pathname(e, apath.c_str());
archive_entry_copy_stat(e, &item->stat);
res = archive_write_header(a, e);
if (res != ARCHIVE_OK)
{
emsg_archive(a);
goto failure;
}
rfd = open(rpath.c_str(), O_RDONLY);
if (rfd == -1)
{
emsg() << "archive: " << strerror(errno) << endl;
goto failure;
}
rsize = read(rfd, rbuf, RBUF_SIZE);
while (rsize > 0)
{
ssize_t wrote = 0;
ssize_t wsize = rsize;
while (wrote < rsize && wsize > 0)
{
wsize = archive_write_data(a, &rbuf[wrote], rsize - wrote);
wrote += wsize;
}
if (wsize == 0)
{
emsg() << "archive: 0 size write." << endl;
goto failure;
}
if (wsize == -1)
{
emsg_archive(a);
goto failure;
}
rsize = read(rfd, rbuf, RBUF_SIZE);
}
if (rsize == -1)
{
emsg() << "archive: " << strerror(errno) << endl;
goto failure;
}
close(rfd);
rfd = -1;
archive_entry_clear(e);
}
archive_entry_free(e);
e = NULL;
res = archive_write_close(a);
m_open = ARCHIVE_FATAL;
if (res != ARCHIVE_OK)
{
emsg_archive(a);
goto failure;
}
res = archive_write_free(a);
a = NULL;
if (res != ARCHIVE_OK)
{
emsg_archive(a);
goto failure;
}
return 0;
failure:
if (rfd != -1) close(rfd);
if (e) archive_entry_free(e);
if (m_open == ARCHIVE_OK) archive_write_close(a);
if (data != NULL) delete[] data;
if (a) archive_write_free(a);
data = NULL;
size = 0U;
return -1;
}
Depends on the systems you target. From "man feature_test_macros":
Quote:
_FILE_OFFSET_BITS
Defining this macro with the value 64 automatically converts references to 32-bit functions and data types related to file I/O and file system operations into references to their 64-bit counterparts. This is useful for performing I/O on large files (> 2 Gigabytes) on 32-bit systems. (Defining this macro permits correctly written programs to use large files with only a recompilation being required.) 64-bit systems naturally permit file sizes greater than 2 Gigabytes, and on those systems this macro has no effect.
I cobbled together a header for your class and a main and your code compiled and ran fine.
I'm a little "goto" averse, so I would probably use try, throw, catch to handle errors.
First of all, thank you for taking the time to test the class in operation; I really appreciate that. Since then I have slightly cleaned up the code and removed what seemed to be excess code.
When it comes to _FILE_OFFSET_BITS, I see I won't need it: my program is going to run only on Linux, and the archives are going to be very small.
Quote:
I'm a little "goto" averse, so I would probably use try, throw, catch to handle errors.
There are two things. First, I haven't programmed in C++ in something like five years, but I program a lot in C, so the "C" way is safer for me at this stage. Second, I also used to be extremely anti-"goto", but I have since loosened the rules a bit. Code written properly using "goto" is much better than incorrectly written code using "throw, try & catch". That being said, it doesn't matter much which "technology" you use as long as you design and code correctly; actually, I think both methods present the same risks.
I'm aware that throw & company would be more the C++ way of doing things, but, as I said, my knowledge is limited, so I prefer to stick to the subset of C++ I know.