Retrieve partial web page

Is there any way of limiting the amount of data cURL will fetch? I’m screen scraping data off a page that is 50 KB, but the data I require is in the top quarter of the page, so I really only need to retrieve the first 10 KB.

I’m asking because there is a lot of data I need to monitor, which results in me transferring close to 60 GB of data per month, when only about 5 GB of this bandwidth is relevant.

I am using PHP to process the data; however, I am flexible in my data retrieval approach — I can use cURL, wget, fopen, etc.

One approach I’m considering is

$fp = fopen("","r");
$data_to_parse = fread($fp,6000);

Does the above mean I will only transfer 6 KB from the server, or will fopen load the whole page into memory, meaning I will still transfer the full 50 KB?

Solutions:

Solution 1

This is more an HTTP question than a cURL question, in fact.

As you guessed, the whole page is going to be downloaded if you use fopen, regardless of whether you then seek to offset 5000 or not.

The best way to achieve what you want would be to use a partial HTTP GET request, as stated in the HTTP/1.1 RFC (RFC 2616):

The semantics of the GET method change to a “partial GET” if the request message includes a Range header field. A partial GET requests that only part of the entity be transferred, as described in section 14.35. The partial GET method is intended to reduce unnecessary network usage by allowing partially-retrieved entities to be completed without transferring data already held by the client.

The details of partial GET requests using the Range header are described in section 14.35 of RFC 2616.
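As a sketch of what a partial GET might look like with PHP’s cURL bindings — the URL and function name below are placeholders, not taken from the question — CURLOPT_RANGE builds the Range header for you:

```php
<?php
// Sketch: fetch only the first $bytes of a page via an HTTP partial GET.
// The URL passed in is assumed to be a placeholder for illustration.
function fetch_head(string $url, int $bytes)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    // Ask the server for bytes 0..$bytes-1; honored only if it supports ranges.
    curl_setopt($ch, CURLOPT_RANGE, '0-' . ($bytes - 1));
    $body = curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    // 206 = Partial Content; a 200 means the server ignored the Range header
    // and sent the whole page anyway.
    return ($body !== false && in_array($status, [200, 206], true)) ? $body : false;
}
```

If the server ignores the range, you still get the full page (status 200), so the bandwidth saving depends entirely on server support.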

Solution 2

Try an HTTP Range request:

GET /largefile.html HTTP/1.1
Range: bytes=0-6000

If the server supports range requests, it will return a 206 Partial Content response code with a Content-Range header and the requested range of bytes; if it doesn’t, it will return 200 and the whole file.

See also: Resumable downloads when using PHP to send the file?
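The same Range header can be sent without cURL at all, using a PHP stream context — a minimal sketch (the commented-out URL is a placeholder):

```php
<?php
// Sketch: send a Range header with a plain stream-based GET request.
// file_get_contents() will include the extra header in its request.
$context = stream_context_create([
    'http' => [
        'method' => 'GET',
        'header' => "Range: bytes=0-5999\r\n", // first 6000 bytes
    ],
]);

// Placeholder URL — uncomment and substitute the real page to use it:
// $data = file_get_contents('http://example.com/largefile.html', false, $context);
```

As with the raw request above, a server that ignores Range will simply return the whole file with status 200.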

Solution 3

You may also be able to accomplish what you’re looking for with cURL itself.

If you look at the documentation for CURLOPT_WRITEFUNCTION you can register a callback that is called whenever data is available for reading from CURL. You could then count the bytes received, and when you’ve received over 6,000 bytes you can return 0 to abort the rest of the transfer.

The libcurl documentation describes the callback a bit more:

This function gets called by libcurl as soon as there is data received that needs to be saved. Return the number of bytes actually taken care of. If that amount differs from the amount passed to your function, it’ll signal an error to the library and it will abort the transfer.

The callback function will be passed as much data as possible in all invokes, but you cannot possibly make any assumptions. It may be one byte, it may be thousands.
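Roughly, the byte-counting callback could look like this in PHP — the function names here are illustrative, and the limiting logic is split out so it is easy to reason about:

```php
<?php
// Sketch: build a CURLOPT_WRITEFUNCTION callback that appends incoming
// chunks to $buffer and aborts the transfer once $limit bytes have arrived.
// Returning 0 (a value different from strlen($chunk)) signals libcurl to
// abort, closing the connection early.
function make_write_limiter(int $limit, string &$buffer): Closure
{
    return function ($ch, string $chunk) use (&$buffer, $limit): int {
        $buffer .= $chunk;
        return strlen($buffer) >= $limit ? 0 : strlen($chunk);
    };
}

// Illustrative wrapper (placeholder name): fetch at most ~$limit bytes.
function fetch_limited(string $url, int $limit): string
{
    $buffer = '';
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_WRITEFUNCTION, make_write_limiter($limit, $buffer));
    curl_exec($ch); // reports a write error once the callback aborts; expected
    curl_close($ch);
    return $buffer;
}
```

Note that chunks arrive in arbitrary sizes, so the buffer may slightly overshoot the limit by up to one chunk.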

Solution 4

It will download the whole page with the fopen call, but it will then only read 6 KB from that page.

From the PHP manual:

Reading stops as soon as one of the following conditions is met:

  • length bytes have been read
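One caveat worth sketching: on network streams a single fread can return fewer than length bytes, so looping is the safer pattern (the helper name here is illustrative):

```php
<?php
// Sketch: read up to $length bytes from an already-open stream.
// fread() returns at most $length bytes per call, but on network streams
// it may return less, so we loop until we have enough or hit EOF.
function read_prefix($fp, int $length): string
{
    $data = '';
    while (strlen($data) < $length && !feof($fp)) {
        $chunk = fread($fp, $length - strlen($data));
        if ($chunk === false || $chunk === '') {
            break;
        }
        $data .= $chunk;
    }
    return $data;
}
```

This only limits what your script reads, not what travels over the wire; per this solution, the full page is still transferred.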


All methods were sourced from their original answers and are licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.
