Wget Linux help

wget -c -U Mozilla "http://musicdiving.com/get/album/b521da052b9c0f6e6c844277de2c8a4e/G_ExtractioN_Breaks_-_DistortioN_G_Break-(NFB046)-WEB-2011-BPM"
 
Is it because that string changes in the link? Where do you get the link from that you want to download?

MusicDiving.com/get/album/b521da052b9c0f6e6c844277de2c8a4e/G_ExtractioN_Breaks_-_DistortioN_G_Break-(NFB046)-WEB-2011-BPM

Do you find that you are downloading the link itself rather than the content? The URL in that command doesn't look like it points to a file.
 
I can't get the site to load at all now :s. It times out.

There might be something in there to stop automated downloading. You would need to look at the headers and transaction details using "Live HTTP Headers" or Wireshark, then try to emulate your browser's GET and POST requests using something like WGET or cURL.
 
If the site is loading for you then run a capture using Live HTTP Headers during the whole process of loading the site and starting the download. Email it to me if you want, I've been using this a bit lately. At a guess, it will have some session authentication and might use cookies.
 
OK, the site seems to be back up and running now. This is the capture after I press the download button:

----------------------------------------------------------
http://musicdiving.com/get/album/b521da052b9c0f6e6c844277de2c8a4e/G_ExtractioN_Breaks_-_DistortioN_G_Break-(NFB046)-WEB-2011-BPM

GET /get/album/b521da052b9c0f6e6c844277de2c8a4e/G_ExtractioN_Breaks_-_DistortioN_G_Break-(NFB046)-WEB-2011-BPM HTTP/1.1
Host: musicdiving.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9) Gecko/2008052906 Firefox/3.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: Drum'n'Bass | Music | MusicDiving.com
Cookie: PHPSESSID*************************************

HTTP/1.1 200 OK
Server: nginx/0.8.54
Date: Sun, 03 Apr 2011 16:27:16 GMT
Content-Type: application/force-download
Connection: keep-alive
X-Powered-By: PHP/5.3.6
Expires: Mon, 26 Jul 1997 05:00:00 GMT
Cache-Control: private
Pragma: private
Set-Cookie: PHPSESSID=deleted; expires=Sat, 03-Apr-2010 16:27:15 GMT; path=/
content-disposition: attachment; filename="G_ExtractioN_Breaks_-_DistortioN_G_Break-(NFB046)-WEB-2011-BPM.zip"
Content-Transfer-Encoding: binary
Accept-Ranges: bytes
Content-Length: 15271939
----------------------------------------------------------
 
Do you log into this site?

I'm getting an error on the page.

An error occurred

Application error


You probably need to enable cookie handling in WGET, and you need to give the page the link is on as the referrer.

Do you need to use WGET for this? I could do it with cURL and PHP, but that would mean you'd need a PHP server running. As you're on Linux that might already be set up, but you need the cURL extension enabled.
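To check whether the cURL extension is enabled, assuming the `php` CLI binary is on your PATH:

```shell
# Check whether PHP's cURL extension is loaded (assumes the "php" CLI is installed).
php -r 'var_dump(extension_loaded("curl"));'   # prints bool(true) if enabled
# Alternatively, list loaded modules and filter:
php -m | grep -i curl
```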

This might be what you need:

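As a sketch (untested against the real site), WGET can carry the session cookie and a referer like this. The cookie-file name is arbitrary, the referer URL is a guess based on the capture above, and `--content-disposition` makes wget honour the server's suggested `.zip` filename; all flags are standard wget options:

```shell
# Sketch: emulate the captured browser request.
# First load the page the link is on so the PHPSESSID cookie gets set,
# then request the download with the same session cookie and a referer.
wget --keep-session-cookies --save-cookies cookies.txt -O /dev/null \
     "http://musicdiving.com/"
wget --load-cookies cookies.txt \
     --referer="http://musicdiving.com/" \
     -U "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9) Gecko/2008052906 Firefox/3.0" \
     --content-disposition -c \
     "http://musicdiving.com/get/album/b521da052b9c0f6e6c844277de2c8a4e/G_ExtractioN_Breaks_-_DistortioN_G_Break-(NFB046)-WEB-2011-BPM"
```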
 
Aye, they seem to have some server issue :( Also, I have PHP and cURL running, as I run a torrent site :p
 
Nice :).

I'll have a go when their site is working.

Are you after a script that gets the latest albums?
 
Aye, I was looking into an auto-upload script. I have a semi-auto one at the moment, but need to somehow make it fully automated.
 
Here's a start:

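As a sketch of the same approach using command-line cURL rather than a PHP script (untested; the user agent is copied from the capture above and the referer URL is a guess):

```shell
# Sketch: cURL equivalent of the captured browser request.
# -c/-b save and send the cookie jar, -e sets the referer, -A the user agent,
# and -J makes -O use the server's content-disposition filename.
curl -c cookies.txt -o /dev/null "http://musicdiving.com/"   # pick up PHPSESSID
curl -b cookies.txt \
     -e "http://musicdiving.com/" \
     -A "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9) Gecko/2008052906 Firefox/3.0" \
     -O -J \
     "http://musicdiving.com/get/album/b521da052b9c0f6e6c844277de2c8a4e/G_ExtractioN_Breaks_-_DistortioN_G_Break-(NFB046)-WEB-2011-BPM"
```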

What you need to do is figure out what happens when you upload or download using your browser, then configure cURL to emulate it. Log the headers, and most of it can be copied into the CURLOPT parameters (like your user agent, etc.). Cookies are handled automatically, but there might be a security token you need to pass back when POSTing; you can parse it out of the page HTML using PHP string functions or regular expressions.
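For the token-parsing step, a hypothetical shell example; the field name `csrf_token` and the sample HTML are made up, so check the real page source for the actual field:

```shell
# Hypothetical: extract a hidden form token from page HTML before POSTing it back.
html='<form><input type="hidden" name="csrf_token" value="abc123"></form>'
token=$(printf '%s' "$html" \
  | grep -o 'name="csrf_token" value="[^"]*"' \
  | sed 's/.*value="\([^"]*\)".*/\1/')
echo "$token"   # prints abc123
```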
 
Let me know how you get on; I've been playing with cURL for a few weeks. Not for moving files around, apart from debugging (echoing cURL output to the browser can get confusing, as the browser has its own session and cookies), but it doesn't seem too different.
 