CURL is more than just a command-line tool; it's a versatile data-transfer utility that lets you interact with web servers directly from your terminal. Whether you're a developer, a system administrator, or simply a curious user, understanding CURL can significantly enhance your workflow. This guide walks you through essential CURL commands and techniques, transforming you into a CURL master.
CURL (Client URL) is a command-line tool for transferring data to and from URLs. It supports many protocols (HTTP, FTP, and more) and lets you download files, interact with APIs, and even automate web tasks. It ships preinstalled on macOS and many Linux distributions, which underlines its utility.
Let's start with the basics. To fetch the HTML content of a website, simply use the following command:
curl http://www.yahoo.com
This command will display the HTML source code of Yahoo's homepage directly in your terminal.
Instead of displaying the content, you might want to save it to a file. CURL provides the -o option for this purpose:
curl -o page.html http://www.yahoo.com
This command downloads the HTML content from Yahoo and saves it to a file named page.html. You'll even see a progress meter in your terminal, indicating the download status.
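As a slightly more defensive variant of the command above, the sketch below adds -f (fail on HTTP errors) and -sS (hide the progress bar but keep error messages). It uses a file:// URL and a throwaway file under /tmp purely so the example runs without network access; substitute any http:// URL in practice.

```shell
# Sketch: -f makes curl fail on HTTP errors, -sS silences the progress bar
# but keeps error messages. The file:// URL is a stand-in so this runs offline.
printf '<html>demo page</html>\n' > /tmp/demo_source.html
if curl -fsS -o page.html "file:///tmp/demo_source.html"; then
  echo "saved $(wc -c < page.html) bytes to page.html"
fi
```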
If you're behind a proxy server, you'll need to specify it when using CURL. The -x option allows you to define the proxy server and port:
curl -x 123.45.67.89:1080 -o page.html http://www.yahoo.com
Replace 123.45.67.89:1080 with your actual proxy server address and port.
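An alternative worth knowing: curl also reads the conventional proxy environment variables, so a proxy can be configured once per shell session instead of repeating -x on every command. The address below is the same placeholder used above.

```shell
# Setting http_proxy has the same effect as passing -x on each invocation.
# 123.45.67.89:1080 is a placeholder; use your real proxy host and port.
export http_proxy="http://123.45.67.89:1080"
# curl -o page.html http://www.yahoo.com    # now goes through the proxy
echo "proxy in effect: $http_proxy"
```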
Many websites use cookies to track user sessions. CURL allows you to handle cookies using the -D and -b options.
Saving Cookies: The -D option dumps the HTTP response headers, including any Set-Cookie lines, to a file:
curl -x 123.45.67.89:1080 -o page.html -D cookie0001.txt http://www.yahoo.com
This saves the page content to page.html and the cookie information to cookie0001.txt.
Using Cookies: The -b option sends cookie information from a file along with the HTTP request:
curl -x 123.45.67.89:1080 -o page1.html -D cookie0002.txt -b cookie0001.txt http://www.yahoo.com
This allows you to maintain a session across multiple requests, mimicking browser behavior.
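To make the round trip concrete, here is a sketch of what a -D dump looks like and how to inspect it. The header contents are invented for illustration, and the replay command is left as a comment because it needs a live server.

```shell
# A header dump in the style -D produces (hypothetical sample contents):
cat > cookie0001.txt <<'EOF'
HTTP/1.1 200 OK
Set-Cookie: B=abc123; path=/; domain=.yahoo.com
Content-Type: text/html
EOF
# -b understands both this plain Set-Cookie style and the Netscape cookie
# file format, so the dump can be replayed directly:
#   curl -b cookie0001.txt -o page1.html http://www.yahoo.com
grep -i '^set-cookie' cookie0001.txt
```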
Some websites restrict access based on the user agent (browser type). CURL allows you to modify the user agent string using the -A option:
curl -A "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)" -x 123.45.67.89:1080 -o page.html -D cookie0001.txt http://www.yahoo.com
This command tells the server that you're using Internet Explorer 6.0 on Windows 2000, even if you're not. You can find a list of user agent strings online to use for different browsers and operating systems.
The "referer" header indicates the previous page visited. Some sites use this to prevent hotlinking. You can set the referer using the -e option:
curl -A "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)" -x 123.45.67.89:1080 -e "mail.yahoo.com" -o page.html -D cookie0001.txt http://www.yahoo.com
Here, the server will think you're visiting from mail.yahoo.com.
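When a script issues many requests with the same disguise, it helps to keep the identity strings in variables so they are defined once. This is just a shell-hygiene sketch; the actual request is commented out because it needs network access.

```shell
# Reusable browser identity for every request in a script.
UA="Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
REF="http://mail.yahoo.com/"
# curl -A "$UA" -e "$REF" -o page.html http://www.yahoo.com
echo "User-Agent: $UA"
echo "Referer: $REF"
```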
CURL excels at downloading files. We've already seen -o for saving web pages. For files, consider these options:
-O (uppercase O): Saves the file with the same name as on the server:
curl -O http://cgi2.tky.3web.ne.jp/~zzh/screen1.JPG
Downloading Multiple Files: CURL can expand sequences in the URL (quote the URL so your shell doesn't interpret the brackets):
curl -O "http://cgi2.tky.3web.ne.jp/~zzh/screen[1-10].JPG"
This downloads screen1.JPG through screen10.JPG.
Customizing Filenames: Use -o with variables for advanced control (the quotes prevent the shell from treating # as a comment or expanding the braces):
curl -o "#2_#1.jpg" "http://cgi2.tky.3web.ne.jp/~{zzh,nick}/[001-201].JPG"
In this example, #1 refers to the first variable part, {zzh,nick}, and #2 refers to the second, [001-201]. This results in filenames like 001_zzh.jpg and 001_nick.jpg, preventing overwrites.
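The substitution can be sketched locally: curl fills #1, #2, and so on with the matched value of each glob, in order of appearance in the URL. The loop below only mimics the resulting filenames (creating empty placeholder files, with the number range truncated to two values); no downloading is involved.

```shell
# Mimic curl's #1/#2 naming: outer loop = {zzh,nick} (#1),
# inner loop = [001-201] (#2, first two values shown).
for user in zzh nick; do
  for n in 001 002; do
    touch "${n}_${user}.jpg"   # same shape as -o "#2_#1.jpg"
  done
done
ls [0-9]*_*.jpg
```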
CURL supports resuming interrupted downloads and downloading files in chunks.
Resuming Downloads: Use -C - to continue a partially downloaded file:
curl -C - -O http://cgi2.tky.3web.ne.jp/~zzh/screen1.JPG
Chunked Downloads: Use -r to specify byte ranges:
curl -r 0-10240 -o "zhao.part1" http://cgi2.tky.3web.ne.jp/~zzh/zhao1.mp3
curl -r 10241-20480 -o "zhao.part2" http://cgi2.tky.3web.ne.jp/~zzh/zhao1.mp3
curl -r 20481-40960 -o "zhao.part3" http://cgi2.tky.3web.ne.jp/~zzh/zhao1.mp3
curl -r 40961- -o "zhao.part4" http://cgi2.tky.3web.ne.jp/~zzh/zhao1.mp3
This downloads the zhao1.mp3 file in four chunks. You'll then need to combine them, with cat on Linux/macOS:
cat zhao.part* > zhao.mp3
or with copy on Windows:
copy /b zhao.part1+zhao.part2+zhao.part3+zhao.part4 zhao.mp3
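The range arithmetic is easy to get wrong, so here is an offline sketch of the same split-and-reassemble idea using a small local file: dd extracts byte ranges much like -r does over HTTP, and cmp confirms the parts join back losslessly.

```shell
# Stand-in for zhao1.mp3: a small local file split into two byte ranges.
printf 'abcdefghij' > zhao1.bin
dd if=zhao1.bin of=zhao.part1 bs=1 skip=0 count=5 2>/dev/null   # bytes 0-4
dd if=zhao1.bin of=zhao.part2 bs=1 skip=5 count=5 2>/dev/null   # bytes 5-9
cat zhao.part1 zhao.part2 > zhao_joined.bin
cmp zhao1.bin zhao_joined.bin && echo "parts reassemble cleanly"
```

Note that cat zhao.part* relies on lexical filename ordering, which breaks once you pass part9 (part10 sorts before part2), so list the parts explicitly for larger splits.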
CURL isn't just for downloading; it can also upload files using the -T option.
FTP Upload:
curl -T localfile -u name:passwd ftp://upload_site:port/path/
Replace localfile, name:passwd, upload_site:port, and path/ with your file, credentials, server, and destination path.
HTTP PUT Upload:
curl -T localfile http://cgi2.tky.3web.ne.jp/~zzh/abc.cgi
CURL can simulate form submissions using GET and POST requests.
GET Requests: Append parameters to the URL (quote it, or the shell will treat & as a background operator):
curl "http://www.yahoo.com/login.cgi?user=nickwolfe&password=12345"
POST Requests: Use the -d option:
curl -d "user=nickwolfe&password=12345" http://www.yahoo.com/login.cgi
Choose the appropriate method based on the server's requirements.
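Spaces and shell metacharacters in form values make hand-built query strings fragile. Below is a minimal percent-encoding helper, a bash-specific sketch for illustration; real scripts can instead lean on curl's own --data-urlencode for POST bodies, or -G --data-urlencode to have curl build the GET query string for you. The sample values are invented.

```shell
# Percent-encode a single form value (bash string slicing; unreserved
# characters per RFC 3986 pass through untouched).
urlencode() {
  local s="$1" out="" c i
  for (( i = 0; i < ${#s}; i++ )); do
    c="${s:i:1}"
    case "$c" in
      [a-zA-Z0-9.~_-]) out="$out$c" ;;
      *) out="$out$(printf '%%%02X' "'$c")" ;;
    esac
  done
  printf '%s\n' "$out"
}
query="user=$(urlencode 'nick wolfe')&password=$(urlencode 'p@ss')"
echo "$query"
# curl "http://www.yahoo.com/login.cgi?$query"        # GET
# curl -d "$query" http://www.yahoo.com/login.cgi     # POST
```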
File Uploads via POST: Use the -F option to simulate <input type="file"> fields:
curl -F upload=@localfile -F nick=go http://cgi2.tky.3web.ne.jp/~zzh/up_file.cgi
This simulates uploading localfile and submitting the form.
HTTPS with Local Certificates: Use -E to present a local client certificate for secure connections:
curl -E localcert.pem https://remote_server
Dictionary Lookups: Use the dict:// protocol to look up words:
curl dict://dict.org/d:computer
CURL is a powerful and versatile tool for interacting with web servers from the command line. By mastering these techniques, you can automate tasks, debug web applications, and explore the internet in new and exciting ways. Experiment with these commands and options to unlock the full potential of CURL. Remember to consult the official CURL documentation for a comprehensive overview of all its features. Also, be mindful of website terms of service and avoid using CURL for malicious purposes.