Get item_(sub)classes out of the auction house api


I have a mysql table (auction_data) where I store the auction house api data.

item_id | item_class | item_subclass | actual_price | actual_quantity | last_update
82033   |            |               | 5727700      | 12              | 2020-09-25 16:30

To get the above data I use this loop:

foreach ($array['auctions'] as $auctiondata) {

    $sql = "REPLACE INTO `auction_data`
                (`item_id`, `actual_price`, `actual_quantity`, `last_update`)
            VALUES
                ('{$auctiondata['item']['id']}', '{$auctiondata['buyout']}', '{$auctiondata['quantity']}', NOW())";

    if ($conn->query($sql) === TRUE) {
        // inserted/updated
    } else {
        // handle the error, e.g. echo $conn->error;
    }
}
The variable $array is the json_decode()d response of the auction house endpoint.

As you see I still need the item_class and item_subclass of all the items in the table.

For this information I would need to loop every single item id through the

/data/wow/item/{itemId} API

and json_decode these responses.

Could you help me code that?


General idea

You need more tables for storing item classes, subclasses and any other shared references you may want to cache.

Start by creating a table structure for classes that can hold the information returned by /data/wow/item-class/index and /data/wow/item-class/:id.

After that, create a table for storing subclasses returned by /data/wow/item-class/:class_id/item-subclass/:id. For this table you’ll need a composite primary key using both item_class_id and item_subclass_id; item_class_id should also be a foreign key to the previous class table.

Once you have classes and subclasses you can start working on your items table using a composite foreign key to the item subclass table.

After that you need to modify your auction table to include a foreign key to your items table.
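Put together, the schema could look roughly like this (a sketch; the table and column names are my own choice, adjust to your conventions):

```sql
-- Item classes from /data/wow/item-class/index
CREATE TABLE item_classes (
    id   INT UNSIGNED NOT NULL,
    name VARCHAR(255) NOT NULL,
    PRIMARY KEY (id)
);

-- Subclasses from /data/wow/item-class/:class_id/item-subclass/:id,
-- keyed by (class, subclass) since subclass ids repeat across classes
CREATE TABLE item_subclasses (
    id            INT UNSIGNED NOT NULL,
    item_class_id INT UNSIGNED NOT NULL,
    name          VARCHAR(255) NOT NULL,
    PRIMARY KEY (item_class_id, id),
    FOREIGN KEY (item_class_id) REFERENCES item_classes (id)
);

-- Items, pointing at their subclass via a composite foreign key
CREATE TABLE items (
    id               INT UNSIGNED NOT NULL,
    item_class_id    INT UNSIGNED NOT NULL,
    item_subclass_id INT UNSIGNED NOT NULL,
    name             VARCHAR(255) NOT NULL,
    PRIMARY KEY (id),
    FOREIGN KEY (item_class_id, item_subclass_id)
        REFERENCES item_subclasses (item_class_id, id)
);
```

Your auction_data table would then reference items (id) instead of carrying the class columns itself.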

Have a look at that table structure as well, since it is also a relational database and stores the item structure like this.

A few observations

It is better to have two separate routines in this scenario:

  • One for keeping the item structure up to date by taking advantage of the new item search endpoint and its pagination feature.
  • One for fetching and updating the actual auction data.

Since you are using PHP, make sure to use multi_curl for a big performance improvement by fetching multiple items at once; of course, you’ll need some sort of queue for updating the items.
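One very simple form of such a queue is just chunking the pending item IDs and processing one batch at a time (a sketch; fetchItemsInParallel and the pacing value are placeholders, not real functions):

```php
<?php
// Sketch: split pending item IDs into batches so only a bounded
// number of requests is in flight at once.
$pendingIds = range(1, 250);        // stand-in for IDs collected from auction data
$batches    = array_chunk($pendingIds, 100);

foreach ($batches as $batch) {
    // fetchItemsInParallel($batch); // your multi_curl routine would go here
    // usleep(250000);               // optional pacing between batches
}
```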

Remember to monitor for HTTP status 429 in case you exceed your API limit (usually 36k requests per hour).
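A tiny helper can centralize that check (a sketch; retrying on 5xx as well is my own assumption, not something the API docs mandate):

```php
<?php
// Sketch: decide how to react to an HTTP status code from the API.
// 429 means the quota was exceeded; back off and retry later.
function shouldRetry(int $httpCode): bool
{
    return $httpCode === 429 || $httpCode >= 500;
}

// After a cURL call you can read the status like this:
// $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
// if (shouldRetry($httpCode)) { sleep(60); /* then retry */ }
```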

Sample tables (adjust as needed)


Classes:

id | name
0  | Consumables

Subclasses:

id | item_class_id | name
0  | 0             | Explosives and Devices

Items:

id     | item_class_id | item_subclass_id | name
109122 | 0             | 0                | Megawatt Filament

Thank you for your detailed answer. It took me a bit of time to set up the multi_curl to fetch the item information. Now it works, but I think the problem is that it makes more than 100 calls per second, so I don’t get all the item information stored in MySQL.

Is this possible? You wrote something about a “queue for updating the items”. Can you explain that a bit more? I would prefer an easy solution here, maybe something directly in the multi_curl function.


I also have a question regarding

taking advantage of the new item search endpoint and its pagination feature.

In my multi_curl URL I use this endpoint:


Which advantage do I have with the search endpoint?


Like I said:

You should not fetch items based on the return of the auction data. Instead you should have a separate job just for updating your items database.

The idea is to use the new search feature to fetch all items 100 at a time until the returned array is empty (which means you already got the last page).

This routine will preload all items so you don’t have to worry about it later when you are fetching auction data.

Also note item data is static, so you only have to run this routine if a new patch comes out.

You don’t have to call /data/wow/item/{itemId} unless you need more information not exposed by the search API.

It makes no sense to fetch the same item over and over just because it is present in different auction entries, as the item data itself is the same.

Sample algorithm:

$page = 1;
do {
  $data = fetch("$page&access_token=XXX");
  foreach ($data->results as $entry) {
    // Insert/update the item on your DB
  }
  $page++;
} while (count($data->results) > 0);

An important note on the use of multi_curl: it will increase resource usage, so you have to be aware of the infrastructure you’ll use. Also, in this scenario, if you don’t have to fetch each item individually using /data/wow/item/{itemId}, you could fetch 10,000 items at a time by requesting 100 pages from search in parallel.
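A rough sketch of that parallel fetching with PHP's curl_multi API (the base URL and page numbers here are placeholders, not the real endpoint string):

```php
<?php
// Build the page URLs first (placeholder base URL).
function buildPageUrls(string $baseUrl, int $firstPage, int $count): array
{
    $urls = [];
    for ($p = $firstPage; $p < $firstPage + $count; $p++) {
        $urls[] = $baseUrl . "&_page=" . $p;
    }
    return $urls;
}

// Fetch all URLs in parallel and return the raw response bodies.
function fetchAll(array $urls): array
{
    $mh = curl_multi_init();
    $handles = [];
    foreach ($urls as $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_multi_add_handle($mh, $ch);
        $handles[] = $ch;
    }

    // Drive all transfers until none are still running.
    do {
        curl_multi_exec($mh, $running);
        curl_multi_select($mh);
    } while ($running > 0);

    $bodies = [];
    foreach ($handles as $ch) {
        $bodies[] = curl_multi_getcontent($ch);
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);
    return $bodies;
}
```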

Please note I’m a former PHP developer, so the information I provided might be outdated; please double-check it against the PHP docs. I also wrote those functions directly in the forum, so some errors are expected; it should serve as a starting point, though.


The search just has 10 pages (1-10) with 100 entries per page. Where do I get the rest of the pages?

With the 11th page I get this error


Oh boy. That is new to me. The documentation does not mention a cap for those endpoints, at least I haven’t seen that.

I was in the middle of updating my own projects to use this; to be honest, so far I had only used 5 pages to check the consistency of the IDs returned by this endpoint. I’ll run some tests to see if there is a workaround. I’m sorry for this inconvenience.

I’ll post an update here if I find anything useful for that.


No problem. I don’t understand why this is limited.

It seems this endpoint is actually capped at 1000 records per query. You can get 10 pages of 100 records, or only one page with 1000 records.

There is a workaround for this: we can use the min feature to get the next 1000 items by setting the minimum item ID to one past the last item of the previous query. A query with [1]&_page=1&access_token= will return the first 1000 items; we then take the id of the last item (2429), add one, and query the next 1000 like this: [2430,]&_page=1&access_token=

$startingItem = 1;
do {
  $data = fetch("[$startingItem,]&_page=1&access_token=");
  foreach ($data->results as $entry) {
    // Insert/update the item on your DB
  }
  if (count($data->results) > 0) {
    $startingItem = $data->results[array_key_last($data->results)]->data->id + 1;
  }
} while (count($data->results) > 0);
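To check the termination logic of this loop without hitting the API, you can run the same pattern against a stubbed fetch (the stub and its IDs are invented for illustration):

```php
<?php
// Stub standing in for the real API call: pretend the full item list
// is just these IDs, served 2 per "page" starting at $minId.
function fakeFetch(int $minId): object
{
    $allIds  = [1, 5, 9, 12];
    $batch   = array_values(array_filter($allIds, fn($id) => $id >= $minId));
    $batch   = array_slice($batch, 0, 2);
    $results = array_map(fn($id) => (object)['data' => (object)['id' => $id]], $batch);
    return (object)['results' => $results];
}

$seen = [];
$startingItem = 1;
do {
    $data = fakeFetch($startingItem);
    foreach ($data->results as $entry) {
        $seen[] = $entry->data->id;
    }
    // Advance past the last returned id; guard against an empty page.
    if (count($data->results) > 0) {
        $startingItem = $data->results[array_key_last($data->results)]->data->id + 1;
    }
} while (count($data->results) > 0);
// $seen now holds every id exactly once, in order: [1, 5, 9, 12]
```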

Important: while there is no restriction, we should wait for a blue answer as to whether this workaround is acceptable. It doesn’t seem like this endpoint was designed to allow such use, otherwise there wouldn’t be a cap.


Thank you for all your efforts.

In case we can use it: normally I use json_decode to get a PHP array. After that I run the foreach loop to update my database.

So in your script, do I decode like this inside the foreach loop and then run my database query?

$phparray = json_decode($data);

In the example I posted I assume the fetch function already returns the decoded JSON data; the foreach is iterating over the decoded result.
Since we can’t safely use the pagination anymore, it is not possible to use multi_curl to fetch multiple searches in parallel; on the other hand, we are getting 1k records per request.

You must adjust this as needed.

I get the following error:

0 Call to undefined function fetch()

Probably the mistake is in my use of the fetch function.
I don’t know how the fetch function can return the decoded JSON to the foreach loop.

I tried it like this:

$startingItem = 1;

do {

  $data = fetch("[$startingItem,]&_page=1&locale=de_DE&access_token=xx");

  // Get all item ids into mysql table "items"
  foreach ($data->results as $entry) {

    $sqle = "REPLACE INTO `items` ..."; // column list left out here

    if ($conn->query($sqle) === TRUE) {
    } else {
      echo "Problem";
    }
  }

  $startingItem = $data->results[array_key_last($data->results)]->data->id + 1;
} while (count($data->results) > 0);

There is no fetch function in PHP; I didn’t provide you ready-to-use code, just a sample of the logic.

You have to implement your own function for fetching data from the API, using cURL or whatever you prefer; fetch in that context is the function retrieving the decoded JSON from the search endpoint.
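For reference, a minimal fetch could be built from cURL plus json_decode, for example (a sketch with bare-bones error handling; the decode step is split out so it can be checked on its own):

```php
<?php
// Decode the raw JSON body into an object tree (throws on invalid JSON).
function decodeResponse(string $json): object
{
    return json_decode($json, false, 512, JSON_THROW_ON_ERROR);
}

// Fetch a URL with cURL and return the decoded JSON as an object.
function fetch(string $url): object
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $body = curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    if ($body === false || $code !== 200) {
        throw new RuntimeException("Request failed with HTTP $code");
    }
    return decodeResponse($body);
}
```

With this in place, $data->results in the loop above works because json_decode is called without the associative-array flag, so it returns objects.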

Hi all,

The 1000 result limit is indeed an intentional limitation applied to the result set. Furthermore, the pattern described by Schiller using the min functionality is indeed an acceptable usage of the Search APIs, so no worries there.

Hopefully, this clears up some of your questions.