Recent posts

#11
WebCopy / Special Character in Web Addre...
Last post by blaidd31204 - May 28, 2024, 04:30:10 PM
I am trying to copy the following website and I believe the special "u" character (the one with the tent-like symbol over it, like the symbol above the 6 key) is causing a problem. I have the most current version of WebCopy. The warning (yield) message that appears does not contain any words to indicate the exact problem (I have changed my screen resolution to see if that would help reveal any message wording, but still no idea).

https://forgottenrealms.fandom.com/wiki/Faer%C3%BBn

How do I get this website?  Thanks!
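(For reference, the "%C3%BB" in that address is simply the UTF-8 percent-encoding of the û character, so the link itself is well formed. The sketch below is not a WebCopy fix, just a quick .NET check showing that the encoded and decoded forms refer to the same page.)

```csharp
using System;

class DecodeExample
{
    static void Main()
    {
        // %C3%BB is the UTF-8 percent-encoding of 'û', so the encoded and
        // decoded addresses point to the same page.
        string encoded = "https://forgottenrealms.fandom.com/wiki/Faer%C3%BBn";
        string decoded = Uri.UnescapeDataString(encoded);

        Console.WriteLine(decoded); // https://forgottenrealms.fandom.com/wiki/Faerûn
    }
}
```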
#12
WebCopy / Re: Multiple download threads
Last post by Yukislk - May 28, 2024, 09:41:15 AM

The more in-depth explanation is that this is an old logged issue, a feature users have requested for a long time.
#13
ImageBox / Re: Selection on image
Last post by needfulhead - May 27, 2024, 08:07:03 AM
If the click occurred outside the selection, we clear the selection by setting selection to Rectangle.Empty. You can also choose to refresh the ImageBox control to update the display and remove the selection visually.
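A minimal sketch of that approach, assuming you track the selection yourself in a Rectangle field and handle the control's MouseClick event (the field and handler names here are illustrative, not part of the ImageBox API):

```csharp
using System.Drawing;
using System.Windows.Forms;

public class SelectionForm : Form
{
    // Illustrative field: the current selection, in the same coordinate space
    // used when drawing it (convert the click point first if your selection is
    // stored in image coordinates rather than client coordinates).
    private Rectangle selection = Rectangle.Empty;

    private void imageBox_MouseClick(object sender, MouseEventArgs e)
    {
        // Click landed outside the current selection: clear it...
        if (!this.selection.Contains(e.Location))
        {
            this.selection = Rectangle.Empty;

            // ...and refresh the control so the old selection is no longer drawn.
            ((Control)sender).Invalidate();
        }
    }
}
```

Wire the handler to the ImageBox's MouseClick event in the designer or the form constructor.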
#14
WebCopy / Schedule / Periodically / Auto...
Last post by Pirreke - May 17, 2024, 09:38:19 AM
Hi,

How do I set up a periodically automated download (scheduled once a month) in WebCopy?

Is there an option to do this? Can't find it in the menu. Online results are confusing.

Regards,
Pirreke
#15
WebCopy / Re: URL not writing to downloa...
Last post by GilbertBergeron - May 08, 2024, 07:00:09 PM
I also had the same problem.
#16
WebCopy / Re: SSL/TLS error: Could not e...
Last post by yodeltired - May 03, 2024, 09:44:32 AM
I am also facing the same issue. :(
#17
WebCopy / URL not writing to download fo...
Last post by FrankWard - April 11, 2024, 03:59:41 PM
I'm trying to download a blog. I've encountered several errors I was able to mitigate using regex. However, I'm not sure how to address this one.

When downloading the site, it pulls various images from domains like this:

http://3.bp.blogspot.com/_-sFohRgxOBI/R3natx-8SXI/AAAAAAAABiA/mp-2BeZnnYk/s1600-h/happy-new-year+Woody+Woodpecker.jpg

In the download folder it's not creating a folder for the domain; it's just writing a TON of the URLs starting with the "_" character that follows the domain, like so:

_-sFohRgxOBI/R3natx-8SXI/AAAAAAAABiA/mp-2BeZnnYk/s1600-h/happy-new-year+Woody+Woodpecker.jpg

This happens with the following domains, but not others, so it appears it may be a bug.

http://1.bp.blogspot.com/
http://2.bp.blogspot.com/
http://3.bp.blogspot.com/
http://4.bp.blogspot.com/

Is there a way to make the program remap these domains to specific folders in the downloaded site hierarchy?
Any advice? Thanks!
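Not a WebCopy setting, but purely to illustrate the layout being described, where each host becomes its own folder with the path nested beneath it, here is a small sketch using System.Uri (the "expected" path is an example of the desired result, not what WebCopy currently produces):

```csharp
using System;
using System.IO;

class HostFolderSketch
{
    static void Main()
    {
        // Example URL from the post.
        var uri = new Uri("http://3.bp.blogspot.com/_-sFohRgxOBI/R3natx-8SXI/AAAAAAAABiA/mp-2BeZnnYk/s1600-h/happy-new-year+Woody+Woodpecker.jpg");

        // Expected layout: host folder first, then the path beneath it.
        string expected = Path.Combine(
            uri.Host,
            uri.AbsolutePath.TrimStart('/').Replace('/', Path.DirectorySeparatorChar));

        // Observed layout (per the post): the host folder is missing, so the
        // name starts at the "_" segment instead.
        string observed = uri.AbsolutePath.TrimStart('/');

        Console.WriteLine(expected);
        Console.WriteLine(observed);
    }
}
```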
#18
WebCopy / How to auto skip warning of Ja...
Last post by IVAN_CYBERPUNK - April 05, 2024, 01:48:34 PM
Hi! I'm trying to make a copy of a local web page using Cyotek WebCopy 1.10.0.898. After a little while the app shows me the message "This website appears to be a JavaScript application". After I press "OK" on the warning screen, the app starts to scan the web page and finishes the process successfully. I have no need to run this script; I only want to save the web page as an HTML file. So I wonder if there is a way to avoid pressing this button every time? I will attach the warning screen and a fragment of the HTML file with this JavaScript. Thank you.
#19
WebCopy / Re: Rule to save HTML files on...
Last post by gary1854 - March 21, 2024, 06:05:58 PM
I tried this using application/pdf to download only PDF files from a site, and it downloads nothing at all.

Quote from: Richard Moss on February 23, 2023, 07:37:29 PM
Hello,

Thanks for the question. You don't need to use rules for this as there is a simpler approach:

  • Open Project Properties (Project | Project Properties)
  • Select Content Types in the left hand tree
  • Click Include only resources with the content types listed below
  • In Types to include, enter
    text/html
  • Click OK to apply the changes and close the dialog
  • Copy the website

Regards,
Richard Moss
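As a conceptual sketch only (this is not WebCopy's actual code), an "include only these content types" filter behaves roughly like the following. Note that if the include list contains only application/pdf, the text/html pages that link to the PDFs are filtered out as well, so a crawl driven by such a filter has nothing left to follow:

```csharp
using System;
using System.Collections.Generic;

class ContentTypeFilterSketch
{
    // Conceptual allow-list: resources whose media type is not in this set are skipped.
    static readonly HashSet<string> IncludedTypes = new HashSet<string>(StringComparer.OrdinalIgnoreCase)
    {
        "text/html",       // needed so pages can still be crawled for links
        "application/pdf"  // the documents actually wanted
    };

    static bool ShouldDownload(string contentTypeHeader)
    {
        // Content-Type headers often carry parameters, e.g. "text/html; charset=utf-8".
        string mediaType = contentTypeHeader.Split(';')[0].Trim();
        return IncludedTypes.Contains(mediaType);
    }

    static void Main()
    {
        Console.WriteLine(ShouldDownload("text/html; charset=utf-8")); // True
        Console.WriteLine(ShouldDownload("application/pdf"));          // True
        Console.WriteLine(ShouldDownload("image/jpeg"));               // False
    }
}
```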
#20
WebCopy / Help copying single file forma...
Last post by gary1854 - March 20, 2024, 04:46:41 PM
Please help!
I have read the tutorial thoroughly, specifically the one similar to what I want to achieve, where it shows you how to use rules to exclude all file types except image formats. I am trying to achieve the same thing but only for the PDF format: I want to scan a whole site and download only the PDF files from it.

I am clearly doing something wrong using the rules feature with my PDF format in place of the image formats. It ignores everything and downloads absolutely nothing at all, but I know it sees the files, because if I don't use any rules at all and tell it to scan a site, I can come back 12 hours later, it will still be scanning the site, and it will already have downloaded well over 1500 PDF files, along with all the other files that come with it and bloat my drive.

Please, is there a video of that tutorial I could watch? I've tried looking through YouTube and can't find anything. If anyone can show me in a video exactly how they achieve this, please let me know. Thank you.

PS: I have also tried not using rules and instead using the content type application/pdf, and it doesn't find any PDF files. But again, with no rules and no exceptions, nothing but the site itself, it downloads all the PDF files just fine.