Hotslogs is up, though not quite back

When HOTSlogs was handed off it was just the skeleton of the site. We were not given any of the existing data or user profiles. This means if you’re a returning user you will need to make a new HOTSlogs account.

We’re a small group of volunteers working to restore and improve HOTSlogs. You will see incremental improvements and old functionality come back over time. Thank you for using HOTSlogs.

HOTSlogs currently only has a few days' worth of replays from HotsApi (the same source Heroesprofile uses), so the listings are pretty low, especially since their own uploader isn't running yet. If people wanna re-register, volunteer to help them, poke some of the sampling differences with a stick (till more replays get added), or update their settings to upload to both Heroesprofile and HOTSlogs, then there ya go.

9 Likes

Happy to see the website back online!

2 Likes

I’m going to be here for quite awhile.

https://gyazo.com/69157c2be9b761a6e7e7b991ab472835

Interesting that even with their three-day sample size, Deathwing has the same ~60% win rate (top hero) as on Heroesprofile,

so the further nerfs azjackson mentioned in the other thread are warranted…

6 Likes

That’s a loooot of games!

You never uploaded through HotsApi before, then, I guess?
Like for Heroesprofile after HOTSlogs went down.

I did that once HOTSlogs went down, dunno why I have to reupload all my games again.

I did that once HOTSlogs went down, dunno why I have to reupload all my games again.

Because all data from old Hotslogs is gone. :frowning:

Massive amounts of hots history vanished into internet dust…

Considering that HOTSlogs is restarting from scraps, is it really worth using it instead of HeroesProfile, which already has an established database (even if it's smaller than old HOTSlogs' was)?

2 Likes

Depends on the features and the aesthetics at this point imo.
If HP looks better and has more features, it should stay. If not, ppl should move back to Hotslogs imo.

2 Likes

How interesting that the websites display such similar stats with their sample sizes. It’s almost as if statistics was created for small sample sizes to represent large populations.

7 Likes

Exactly, but try telling that to the pea brains claiming “not every person uploads therefore it’s all guessing”… :grimacing:

3 Likes

Since HOTSlogs (HL) only counts games at hero level 5+, I applied the same filter on Heroesprofile (HP).
After that I filtered a bit more to see only Diamond+ SL games.
The results are:

HP vs HL:

  • WM: 55.6% vs 78.1%
  • DW: 61.4 % vs 72.2%
  • Cho: 55.9% vs 40.0%
  • Hammer: 50.9% vs 72.7%
  • Jo: 59.6% vs 54.9%

Just to select a few. So similar indeed. But alright: since my filtering made the sample size smaller, let's remove a (btw imo meaningful) filter, the rank filter, to increase the sample size and even things out.

HP vs HL:

  • WM: 51.8% vs 51.6%
  • DW: 60.2% vs 63.1%
  • Cho: 50% vs 48.3%
  • Hammer: 48.8% vs 49.7%
  • Jo: 51.8% vs 51.8%

Much better! From 5-22% mismatch to 0-3%.
Really close, but first,

Kinda obvious to get more and more similar data if you use more and more from the same source.
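That shrinking mismatch is also roughly what plain sampling predicts: two independent samples drawn from the same pool drift apart at small n and converge at large n. A quick simulation sketch of that effect (purely illustrative; the 52% "true" win rate and the game counts are assumptions, not either site's actual data):

```python
import random

def sampled_winrate(true_p, n, rng):
    """Observed win rate from n simulated games with true win rate true_p."""
    return sum(rng.random() < true_p for _ in range(n)) / n

rng = random.Random(42)
for n in (50, 500, 5000):
    a = sampled_winrate(0.52, n, rng)  # "site A" sample
    b = sampled_winrate(0.52, n, rng)  # "site B" sample
    print(f"n={n:4d}: site A {a:.1%} vs site B {b:.1%} "
          f"(mismatch {abs(a - b):.1%})")
```

At n=50 the two "sites" can disagree by several percent even though the underlying pool is identical; by a few thousand games they land close together.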

Second, the argument was never about HP stats not being close to reality.
The argument was that it's questionable how close they are.
The smaller the sample size is, the higher the uncertainty, and on top of that these statistics have a margin of error.

The source of the concern (at least on my part) is that Blizzard stated that margin of error to be ±5% on old HOTSlogs (which, with the biggest sample size, had the smallest uncertainty), which is big enough for a hero to go from "this win rate is OP/UP" to "hmm, it's kinda balanced", because Blizzard's target range for a "balanced" win rate is 45-55%.
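For a rough sense of where a band like ±5% comes from: the standard 95% margin of error for a binomial proportion is about 1.96·√(p(1−p)/n). A minimal sketch (the sample sizes are illustrative, not the sites' actual game counts):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an observed win rate p over n games."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative sample sizes, not real site counts.
for n in (100, 400, 2000, 10000):
    moe = margin_of_error(0.55, n)
    print(f"n={n:5d}: 55.0% +/- {moe * 100:.1f}%")
```

At around 400 games per hero, the band is roughly ±5%, so a listed 55% win rate could plausibly sit anywhere from "balanced" to "nerf-worthy"; it takes thousands of games before the band narrows to a percent or two.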

So these third-party site stats should always be taken with a healthy grain of salt, instead of being used as the "ultimate argument and destroyer of debates" like some ppl try to use them.

But what do I know, I have

:stuck_out_tongue_winking_eye:

2 Likes

sort of.

Having a poll check the population distribution of something is different from checking whether a variable had an influential reaction. The breakdown of what the stats indicate is generally tied to a call to action based on those stats; as such, some of the thresholds of interest can vary enough to make it harder to distinguish whether something is an outlier, influenced by noise (or placebo), or influenced by skew (because of the sampling).

Since both sites currently sample from the same source, the differences evident do indicate potential trending differences in the sampling, especially when applying filters.

Similarly, stats like these would be better viewed as over-time samples – rather than as snapshots of the last update – so they could map trends of change rather than just post the current numbers. Topical responses tend to look at the as-is number, and not the change % unless it's really big (and big swings tend to come from small-sample influence).

Part of the issue with stat-reading is that those are particulars people overlook because the stuff is "good enough" to demand their call to action. When the call isn't met (sometimes because of the deviation between source and sample), they get upset, because they thought the numbers posted were "good enough", and so they display mistrust toward anything that doesn't agree with them.

A frontpage look at HOTSlogs, though a small sample, puts DW @ 63% w/r;
for HP, he's at 58.93%, and filtering out the lower levels does bring it up to 60%. But the overall data point there exceeds 100%, thus indicating a direct example of potential skew; a wider date range (two more minor patches) brings the skew up to a 113% influence on Deathwing.

  • Is the winrate influenced because the sampling exceeded 100%?
  • Is duplicate information a factor?
  • Is the winrate not influenced, but exceeding 100% just a display error?
  • If the trend over time has persisted downward, is the concern over DW just one of exposure (high ban rate), and he's not as egregious as the initial stipulations posted on seeing 60%+ rates, because people finally adjusted?
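On the duplicate-information question above: one common safeguard against counting the same match twice (hypothetical here; I don't know whether either site actually does this) is to deduplicate uploads by a content hash before computing win rates, since the same replay file uploaded by several players in a match hashes identically:

```python
import hashlib

def replay_key(replay_bytes):
    """Content hash of a replay file; identical uploads collide."""
    return hashlib.sha256(replay_bytes).hexdigest()

def dedupe(replays):
    """Keep one copy of each distinct replay, in upload order."""
    seen = set()
    unique = []
    for r in replays:
        k = replay_key(r)
        if k not in seen:
            seen.add(k)
            unique.append(r)
    return unique

# The same replay uploaded by two different players counts once.
uploads = [b"replay-1", b"replay-2", b"replay-1"]
assert len(dedupe(uploads)) == 2
```

Without something like this, heroes popular among active uploaders get over-weighted, which is exactly the kind of skew that could push an aggregate past 100%.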

Obviously, changes were already confirmed to be coming; however, if people look at the high spikes, they may be expecting a far bigger change than will happen if the w/r peaked higher because of the skew; if the change falls below their expectations, then you may see reactions fall into camps of "oh, Blizz is just showing how p2w he is", "they have no clue what they're doing", "they're always bad at balancing", and so on.

Deviations have influenced deterministic deduction: how people deal with deviation isn’t as direct as they demand.

The other point of concern is that the 'best' mapping these sites do is usually when no filters are applied, or only filters that can accurately be mapped. When people start filtering for Diamond+, these systems have to guess where the ranks fall and distribute that over the population they're mapping. As an example, HP listed my Storm League seasons as gold, but… I finished in diamond. It has to estimate the starting MMR and the influence of season changes, and can only estimate in proximity to those it matched me with; my lower rating may pull others matched with me downward and thus exclude their data from the filtered trends, despite it actually being within range of the sample. So how does that influence talent percentages compared to the source?

So between sampling deviations, filtered skews, and estimation guesswork, it's harder to make direct calls to action based on specific values. One range of dates can see a hero exceed 56%, and another can see them 5% lower; some players consider a few of the constraints Blizzard uses for balance, but neglect the others.

And even then, a lot of the calls being suggested could be made without needing to rely on the stats at all; if something 'seems' too strong, then suggestions to bring it downward help; similarly, posting % responses doesn't help those that have issues, whereas direct examples do.

Generally, the biggest indicator may just be the population distribution; a high pick rate indicates a stagnant meta, so things will likely change to deter the popular picks – even if the stats don't agree (as can be seen in some dev notes where low-success heroes continue to be highly popular); some stuff people call UP, others consider OP, and then they argue over which to disregard because they argue over the deviations.

The last little aspect here is that some people content themselves with reinforcing the stats presented; sometimes it's an excuse to give up or not try as hard, other times it's just a source of frustration that stems directly from the number shown, and not from the other factors involved. Because some metrics of balance intent were conveyed, some players (even the ones that don't post here) take that to shape their understanding of the game.

So some of the deviations have influence on expectations and that directly influences how people conduct themselves.

I too often find massively different results after cherry picking data.

1 Like

Like missing every one of my points except the fact that I’d like to be able to use filters?

1 Like

I mean, let’s be real, no one serious about balance assumed DW was balanced at any level of play, but the problem is the number of people complaining about him was so large you had whiners creating static.

Good news it’s back, I just hope it’s not riddled with malware like the old site.

I think the site changed hands several times, and one of the recent owners was just milking it. It's not uncommon for that to lead to some rather lax ad vetting. It's gotten so bad on so many sites that you really can't afford to go anywhere without a good ad blocker like uBlock Origin.

They use the same source so it shouldn’t be odd at all… They’re both using hotsapi now.

Oof, yesterday I deleted 1k games, so I suppose they are doing nasty things in the recycle bin. Maybe if it's quick I'll do something, but if not I'm sending them to oblivion.