Are CS:GO Teams More Inconsistent At The End Of A Season?
Using statistics and trends to determine once and for all whether the CS:GO inconsistency myth is true.
What is “inconsistency”, and why should anyone care?
The premise of this piece is to delve into a concept often spoken about in the CS:GO professional scene - the idea that as a season drags on, the consistency of results produced in top tournaments is lessened. This is frequently used as a justification to place the CS:GO Majors at the end of a season, and sometimes used as a complaint by talent and viewers alike during a particularly inconsistent tournament.

The obvious idea of inconsistency is the difference between the expected result and the actual result. For better or for worse, this is a highly debatable topic when it comes to many teams, as you have to consider a myriad of factors (players, matchups, maps, etc.).
To simplify this, I’ve gathered the results of many large events (not every one, and not without restrictions - more on that at the very end) and compared each team’s placement in a tournament with their HLTV ranking at the time. Taking the absolute difference for every team, you can assign each tournament an “inconsistency score”: the average of those absolute differences. I will use this to measure the inconsistency of each tournament, and by noting how far along in the year each tournament falls, I can visualise whether the trend of inconsistency really exists!
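For the programmatically inclined, the score described above can be sketched in a few lines. This is a minimal illustration of the idea, not the exact spreadsheet formula used for the article - the function name and the toy data are my own.

```python
# Hypothetical sketch of the "inconsistency score": the mean absolute
# difference between a team's HLTV rank going into the event and its
# final placement.

def inconsistency_score(results):
    """results: list of (hltv_rank, placement) tuples for one tournament."""
    return sum(abs(rank - placed) for rank, placed in results) / len(results)

# Toy example: a perfectly "consistent" 4-team event scores 0.
perfect = [(1, 1), (2, 2), (3, 3), (4, 4)]
print(inconsistency_score(perfect))  # 0.0

# One swap at the top (the #1 team exiting 4th and vice versa) raises it.
upset = [(1, 4), (2, 2), (3, 3), (4, 1)]
print(inconsistency_score(upset))  # 1.5
```

So an average score of around 4 across a 16-team event means each team, on average, finished four spots away from where its ranking predicted.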
To stay up to date on my CS:GO ramblings, be sure to follow my Twitter!
The General Result

With an average inconsistency across all these tournaments of approximately 4 (that is, every team on average placed 4 spots higher OR lower than their expected placement), there does not seem to be an obvious trend based on when a CS:GO tournament is held. Upon closer inspection, a few details reveal themselves - the tournaments in the first 20% of the year (The ELEAGUE Majors and Katowice) generally seem to be more consistent, whereas the latter half of the year (including some Colognes and other sporadic tournaments) seems to gather above the red line.
It is also important to note the gap around 60% of the way through - this is where the player break is. Let’s get down to business and directly compare Katowice and Cologne, then!
Katowice, Cologne, and The Majors
Before we delve into the implications here, there is something to note - every Cologne since 2016 and every Katowice since 2018 are part of this data (pre-2018 Katowice was not a 16 team tournament, which matters for my methods of calculation). This includes Katowice 2019 and Cologne 2016, which are also Majors.
The average inconsistency for all tournaments is around 4.04. Majors are almost exactly the same, at 4.01. Colognes do show a slight trend upwards, at 4.27 - but what is most noticeable is the below-average 3.27 of the last four Katowice tournaments. (If we excluded the Major in 2019, this becomes 2.92!)
This suggests that out of these significant tournaments, the most consistent tournament is almost always Katowice. If we decide to exclude the Katowice Major in 2019, the most inconsistent Katowice doesn’t even exceed the LEAST inconsistent Cologne (Katowice 2022 versus Cologne 2017/2021, if you’re interested).
The data certainly suggests that while inconsistency at the end of a season is not dramatically elevated, the reverse effect is real - the consistency at the BEGINNING of a season is remarkable.
Before I get into the asterisks regarding all this (HLTV ranking non-believers, you’ll have your chance), let me talk about the most and least consistent tournaments in CS:GO over the past several years.
IEM Cologne 2022: Fresh in our memory
Okay, I lied - this is not the most inconsistent tournament, although it is in the top 3. But since the fantastic Bo5 final between NaVi and FaZe is still fresh in some of our minds, let’s do a little recap and figure out what made this tournament inconsistent by looking at some outlier performances, both good and bad.
ENCE - ranked 3rd, exited 16th
ENCE in Cologne 2022 were victims of an incredibly unfortunate bracket. Their first draw was Vitality - and G2’s opening loss meant that, instead of a more predictable lower bracket opponent like Movistar Riders, ENCE’s elimination match pitted the #3 and #6 teams in the world at the time against each other.
In another bracket, perhaps this result does not happen - but if we take the rankings at their word (which we should, since ENCE had recently placed second at IEM Dallas whereas G2 had been struggling for a while) then ENCE were the favourites in both games. But out of the many teams who underperformed at the top, ENCE came dead last - meaning I had to give them some attention here.
Cloud9 - ranked 4th, exited 12th
Just scraping into this list by virtue of my (arbitrary) cutoff of eight spots between current rank and result, Cloud9 are perhaps more significant on this list because they are one of the few teams to win a trophy in front of a crowd this year.
They started Cologne comfortably beating Outsiders, but then ran into Astralis and Liquid (two teams we will talk about later). Failing to win a map against either team, Cloud9 exit this tournament with no maps won against a top 10 team (ENCE did take a map off of Vitality) and a puzzling result to look back on. As a team that has recently won on LAN, I believe that this team can find confidence and update their stratbook going forward to remain competitive.
Astralis - ranked 12th, exited 4th
Now we start to get into the upsets - teams performing higher than expected. Astralis were a team that people were starting to write off as a failure - rumours of device and valde to replace Farlig and Xyp9x were swirling around. In the middle of all this, a neat 2-0 against Furia and Cloud9, two teams ranked higher than them at the time, enabled them to reach the quarterfinals.
Thanks to the overperformance of MOUZ (another topic for later), and the predictable strength of NaVi and FaZe, Astralis were able to reverse sweep MOUZ before they met their inevitable end against NaVi. The usual suspects of blameF and k0nfig were performing as usual - but notably, the prior liabilities of Farlig and Xyp9x were not as prominent either, with Farlig finding more impact with the AWP here than in previous tournaments. Still, I have less faith that this team as it stands is going to challenge the top in the future - and with trace saying “There’s not going to be any changes” in an interview with HLTV’s Professeur, I think this is as far as this team gets.
Movistar Riders - ranked 14th, exited 4th
A team with a surprisingly deep story behind it (check out Reddit user /u/eLvare345p’s summary!) managed to reach the semifinals of an international LAN. Movistar Riders, fresh off a local Valencia victory on LAN, followed their good form up with series wins over G2 and Vitality in the group stage to qualify for the quarterfinals. This would have been sufficient for me to count it as a significant disparity - but in their match against Liquid, they fought hard on the stage and managed to squeeze past the revitalised NA team before crumbling against an unstoppable FaZe.
Unfortunately, with SunPayus looking to go to ENCE, the future for this team looks bleak - it would already have been difficult to maintain that kind of dream run performance consistently going forward. Sadly, I can’t predict in favour of this team - but as with any underdog story, I’m happy to be proven wrong.
Liquid - ranked 15th, exited 6th
An opening loss to Spirit may have initially soured the mood, but a lower bracket run which included 00Nation, Furia, and notably Cloud9, directed this team to a playoffs spot - after this, they played Movistar Riders in a close series that could have gone either way.
The differentiating factor of this team is obvious - YEKINDAR. Not just in a fragging sense, but in a calling sense, according to EliGE. Why the core of the Grand Slam winning team needed a young entry fragger from Latvia to come over and help completely rewrite their playbook, I’ll never know - but the fact is that the team performed much better with him.
Of course, as many teams have probably figured out over the past several months, extracting YEKINDAR from his contract is not a simple task. He is still playing on loan for the time being - and with players like oSee developing further, and EliGE + NAF returning to top form, there is serious potential in this team moving forward.
MOUZ - ranked 18th, finished 6th
The international squad which started the year with NBK and immediately benched him managed to complete a lower bracket run of their own to qualify for playoffs. Taking a map off of NaVi in their opening game was already a good sign - they went on to eliminate Heroic, Vitality, and NIP, all teams that people were hoping to see more of. Their quarterfinal matchup against Astralis started well, but the inexperience of the team prevented them from making it any further.
In various comments on HLTV Confirmed, dexter (the pride of Australia and Aussie IGLs at the moment) essentially implied that the team had a big therapy session over multiple days that resulted in a restructuring of the team. Even more interesting than that are the remarkable T-side performances seen from the team, including comebacks - things that are rare in this CT-sided meta.
Pretty much every expert concludes that this run from MOUZ is delaying the inevitable with this team, but as a representative of Oceania I have to stand up for dexter a little bit. When you can point to something specific as the reason for your team’s improvement (the restructuring) then I think a “fluke” run can translate into a higher overall level from the team going forward. That, and the team’s inexperience overall, gives me hope for these boys going forward.
The Most Inconsistent CS:GO Tournament
As someone who began following the scene a few years after this tournament, I may not have all the specific details right, and the in-depth matchup analysis may be missing - but the statistics don’t lie, at least in this case.
PGL Major Krakow 2017 was the most inconsistent CS:GO tournament I found over the past several years (honorable mentions include DH Malmo 2019 and ELEAGUE S2), and looking at the results page will tell you why.
Before we even get to the final, which was a #10 ranked team versus a #15 ranked team, let’s just go over some of the other weird results here.
An early iteration of the FaZe lineup (with allu and kioShiMa) was ranked 2nd, but left dead last, losing to MOUZ, BIG and Flipsid3.
Teams like G2, C9, NaVi, who were 4-6th at the time, all failed to make it past 11th place, with NaVi going out in 14th.
Virtus.Pro, still the legendary lineup at the time, had one of their last hurrahs here, making a semifinal while ranked 14th.
BIG was ranked 17th but still made it to the quarterfinals.
And the Major Final itself was Immortals versus Gambit - thanks to the upsets before this, Immortals only had to beat a lower ranked BIG to reach a semifinal against VP, who had run out of steam at this point. At least Gambit did go 3-0 in the group stage, and beat Astralis in the semifinal - but overall, this tournament was a breeding ground for upsets.
To put this in perspective - the PGL Antwerp 2022 Major had the final we were all hoping for. An equivalent final (#10 vs #15 at the time) would have been Outsiders versus BIG. That alone tells you how crazy this Major was.
The Sanest CS:GO Tournament
While a few Katowices (2020, 2018) boasted remarkable sub-3 inconsistency scores, and PGL Antwerp this year did the same, the most consistent CS:GO tournament is actually DreamHack Masters Las Vegas 2017, with a score of 2.3125.
By the very nature of the concept, there isn’t much to talk about here! Not only did the four bottom-ranked teams (TYLOO, Renegades, Misfits, Complexity) fill the last four spots, but the only significant discrepancies in placements were OpTic going out in 12th place as a top 5 team and MOUZ making top 8 as the 14th-ranked team.
The 2nd best team won the tournament, the 3rd best team placed second, and the #1 team placed 4th - none of these results are very interesting to talk about, because that’s the point! This tournament was probably the most sensible storyline to draft from a talent perspective. (VP beat Astralis and SK, #1 and #3, so their victory even feels right)
The Boring Technicalities
If you’ve made it this far, or you skipped to the end, you really must care about the nitty gritty - how did you calculate this stuff? Is it remotely accurate? Is this just cherry-picking data? And other such questions.
The Number Stuff
You already know that the inconsistency score for each tournament is based on the average difference between a team’s HLTV ranking and their actual placement. However, to keep it simple, I decided to compare like with like, and only look at tournaments with 16 teams - this is because pretty much every tournament I look at has 2-6 “non-qualifiers”, teams in the top 16 on HLTV that do not attend the tournament in question. This means tournaments with, for instance, 12 teams (like Katowice 2017) will have a larger proportion of qualifier teams, from regions like Asia, Oceania, NA, who are naturally ranked outside of the top 16 and are more difficult to fit in with the 16 team tournaments.
For teams participating from outside the top 16 (which occurred in every single tournament, by the way), I simply assigned them an HLTV rank of 16. As they were expected to go out in last place, and because it makes no sense to use an actual HLTV ranking of 500, which would greatly skew my average statistics, this makes the most sense. I did note down the non-qualifiers in each of these tournaments for clarity’s sake, but didn’t find a use for that statistic in this article.
I also used the bottom value in any placement range - because realistically, that’s how everyone views a placement. Someone placing “5-8th” at a major did not come top 5, they came top 8, and the same logic flows down to placements like 9-12th. Of course, this can compound with non-qualifiers to make inconsistency in places where there might not be any - for instance, if the teams 9th through 12th are not present in a tournament, a lower ranked team might end up here because there are not enough last places to hand out! This should not have a large impact overall, but it’s worth mentioning.
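The two normalisation steps above can be sketched in code as well. Again, this is a hypothetical illustration of the rules described in this section - the function names are mine, not taken from my actual spreadsheet.

```python
# Hypothetical sketch of the normalisation rules: ranks outside the
# top 16 are capped at 16, and a shared placement range like "5-8th"
# is recorded as its bottom value (8).

def effective_rank(hltv_rank, field_size=16):
    # A #500-ranked qualifier is treated as rank 16: expected to finish last.
    return min(hltv_rank, field_size)

def effective_placement(placement_range):
    # "5-8" -> 8, "13-16" -> 16, "1" -> 1
    return max(int(p) for p in placement_range.split("-"))

def tournament_score(entries):
    """entries: list of (hltv_rank, placement_range_string) pairs."""
    diffs = [abs(effective_rank(r) - effective_placement(p)) for r, p in entries]
    return sum(diffs) / len(diffs)

# A #500-ranked qualifier exiting "13-16th" counts as 16 vs 16: no surprise.
print(tournament_score([(500, "13-16")]))  # 0.0
```

This also makes the compounding effect concrete: if the teams ranked 9th-12th skip an event, a capped-at-16 qualifier can only place as low as 13-16th, producing a small "inconsistency" that is really just an artifact of who attended.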
The Accuracy of HLTV’s Rankings
With both my articles so far being about HLTV, you might assume I’m a little blind to the flaws of HLTV’s rankings for CS:GO teams. Allocations of points aside, one thing is for certain - the team ranked 12th and the team ranked 13th probably don’t have more than a couple points separating them. Once you get down from #1 and #2, rarely is there a significant point difference between teams, and it only gets worse the lower you go.
I believe, however, that in taking statistical averages of averages, this effect will also be minimised - there are cases where the 12th best team and the 4th best team are similar in points and thus an upset is more likely than it seems on the surface, and there are cases where it doesn’t happen at all. By averaging 35 tournaments, I think that the worst of these issues is avoided, and if you look at a specific tournament, you are looking at matchups and seeding and other factors beyond the basic ranking anyway.
The Second Season Kinda Sucks
23 of the big tournaments I looked at were in the first half of the season, and 12 were in the second half. Not to say that there aren’t great tournaments from a viewership perspective in the second half of a year - take for example the Blast Fall Finals, or the second Major. But the first half of the year is loaded with Katowice, a Major, and Cologne - and since we are stuck in an ESL/Blast duopoly when it comes to big tournaments that aren’t Majors, there aren’t really any 16 team events in the second half of the year.
Blast is trying to make their circuit prestigious with prize pools, production, and talent - but if they want their tournaments to have more meaning, their Global Finals should be 16 teams (in my opinion). Unless there is opportunity for many teams to qualify to your biggest tournament yet, it’ll always remain as a tournament mostly for Blast partners with a few teams that ran the qualifier gauntlet sprinkled in. It would also help me with further research about inconsistency in the second season!
Conclusion
CS:GO tournaments exhibit inconsistency as a natural side effect of teams improving and declining over time. To answer the titular question, I think there is a slight correlation between time in the season and inconsistency - comparing Katowice versus Cologne alone seems to show that exact phenomenon. Perhaps shifting the Majors to the end of a season would fix it, or perhaps it would make Majors more inconsistent - that’s something we don’t know for sure.
This analysis wouldn’t have been possible without HLTV’s extensive tournament and ranking database, so shout out to the people over at HLTV.org! Not that they need it from me, of course.
This analysis took a lot of time, maths, spreadsheet stuff, and fiddling that isn’t wholly evident in my writing here - if you want any of the (messy) raw numbers, or have any questions about anything, please message me on Twitter!

Thank you for reading and supporting my little CS:GO journalism sidequest!