
How can I convert all of the pages from a website to PDF?

I need to make edits for a website developer / designer and he asked me to print all of the pages and make comments directly on them.

There are about 35 unique pages that I want to edit. There are also thousands of pages that can be automatically generated from a database, but those are numbered sequentially, so I would like to download only the pages with an index of 1 in the address.

It would be great if there were a bash solution that I could use, but Ruby would also work - the site is written in ruby, and the developer is good with ruby and bash and some other languages - so if you could give some suggestions to get us started, that would be great.

I want to print all pages under the server.com/ directory except, for pages indexed by record, those with an index greater than 1:

  • server.com/records/
  • server.com/records/1
  • server.com/records/1/new

but not

  • server.com/records/2
  • server.com/records/2/new

or any pages with a ? in them like

  • server.com/records?letter=K

and so on.
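The filtering rules above can be sketched as a small shell function (this is just a sketch: `keep_url` is a made-up name, and the `server.com/records` paths come from the examples above — adjust the patterns for the real site):

```shell
#!/bin/sh
# Decide whether a URL should be downloaded:
#  - skip anything containing a query string ("?")
#  - for /records/ pages with a numeric index, keep only index 1
keep_url() {
  case "$1" in
    *\?*) return 1 ;;   # skips server.com/records?letter=K and friends
  esac
  if printf '%s\n' "$1" | grep -Eq '/records/[0-9]'; then
    printf '%s\n' "$1" | grep -Eq '/records/1(/|$)'
  else
    return 0            # un-indexed pages like server.com/records/
  fi
}

# Example: filter a crawler's URL list down to the printable pages
for url in server.com/records/ server.com/records/1 server.com/records/1/new \
           server.com/records/2 'server.com/records?letter=K'; do
  keep_url "$url" && echo "$url"
done
```

A function like this can gate whatever fetch loop you end up with; alternatively, recent GNU wget (1.14+) has `--accept-regex`/`--reject-regex` options that can express the same rules during a recursive crawl.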

Is there a simple, automated way that I could convert all of the pages to pdf?

Normally I'd recommend good old Prawn, but now there's PDFKit, which can use your HTML+CSS as-is.

There are good Railscasts on both PDFKit and Prawn.

Here is a great article that can help you: http://jimneath.org/2009/02/16/creating-pdf-documents-in-ruby-on-rails/ You can also look at princely: https://github.com/drnic/princely :)

Good luck :)

wkhtmltopdf is an excellent tool that works nicely on Ubuntu:

 wkhtmltopdf www.google.com foo.pdf
 xpdf foo.pdf
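To convert your whole list of pages in one go, a loop like this works (a sketch: `urls.txt` is an assumed file with one URL per line, and `url_to_name` is just a helper for building safe file names — neither is part of wkhtmltopdf itself):

```shell
#!/bin/sh
# Turn a URL into a safe PDF file name: drop the scheme,
# trim any trailing slash, and replace awkward characters.
url_to_name() {
  printf '%s' "$1" | sed -e 's|^http://||' -e 's|^https://||' \
                         -e 's|/$||' -e 's|[/?=&]|_|g'
}

# Convert every URL in urls.txt to its own PDF.
if [ -f urls.txt ]; then
  while IFS= read -r url; do
    wkhtmltopdf "$url" "$(url_to_name "$url").pdf"
  done < urls.txt
fi
```

For example, `http://server.com/records/1/new` ends up as `server.com_records_1_new.pdf`, so the 35 PDFs keep a recognizable mapping back to their pages.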
