Diablo 3

Augments – A Spectrum of Compromise between Replacing and Stacking


While reading this thread about augments stacking/replacing, I noticed an interesting sidetrack in the comments tapping into potential compromises between the two, and it got me thinking. TLDR at the bottom for the short-on-time.

Obviously, simple additive stacking of augment effects would be overpowered and break the game. This is because the game's dominant infinite progression system is paragon. The required total XP (in the long term, a rough proxy for total invested playtime) for a target paragon level goes up with that level to the power of 3, which inversely means that your paragon level grows roughly with the cube root of your XP/playtime.

The cube root is a sublinear function: compared to a straight line, it flattens off the further you go (though it never actually plateaus – you can still reach any height you want, you just have to go much further to get there than if it were a line).

If augments stacked additively, since you always need about the same time to get a gem to a certain level, your "power" would grow like a line (to the power of 1) rather than the cube root (to the power of 1/3) with your game time. This would completely overshadow and trivialize the paragon system.
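To make the scaling gap concrete, here's a minimal sketch. The units and rate are invented; only the exponents matter:

```python
# Hypothetical scaling comparison: paragon-style cube-root growth versus
# additive augment stacking. Units and rates are made up for illustration.

def paragon_power(t):
    # paragon level grows roughly like the cube root of invested playtime
    return t ** (1 / 3)

def additive_augment_power(t):
    # each augment costs constant time, so the total effect grows linearly
    return t

# The linear curve quickly dwarfs the cube root:
for t in (10, 1_000, 100_000):
    print(t, round(paragon_power(t), 1), additive_augment_power(t))
```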

So that is the mathematical reason why additive stacking would indeed be overpowered.

But hang on for a moment! Perhaps we could design a system which does allow successive augments to stack a bit, but doesn't break the game by outgrowing paragon? A sort of compromise that makes it feel less awful and like a waste of the previous gem/augment when you overwrite it, without dominating end game grind?

And sure enough, there are options for this, more elegant and smooth than stacking with a hard cap.

We first need to introduce a bit of formalism. Let T be the total aggregated effect from all the augments, and I_1, I_2, … be the individual effects of each individual augment. In this framework, the standing two aggregation mechanisms look like this:

  • replacing: T = max{ I_1, I_2, … } , assuming you'd never replace with a worse effect
  • adding: T = I_1 + I_2 + …
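In code, the two existing mechanisms might look like this (a sketch with made-up individual effect values):

```python
# The two existing aggregation mechanisms, applied to hypothetical
# individual augment effects I_1, I_2, ...
effects = [70, 75, 80]

replacing = max(effects)  # T = max{I_1, I_2, ...}
adding = sum(effects)     # T = I_1 + I_2 + ...

print(replacing)  # 80
print(adding)     # 225
```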

If we want to find a natural compromise somewhere between the two, we need to conceptualize both of these as special cases of one and the same general mechanism.

Huh, curious. When you compare the two, it's not at all obvious how or even if they can be generalized into one framework at all. But they can!

Ladies and gentlemen, let me introduce you to…

*drumroll*

The Lp-norm!

Look at it, it's amazing. If you've ever played Minecraft, you already have intuitive experience with three of them: between points A and B, L1 is how many minecart tracks you need to connect them, L2 is how far you have to walk, and L∞ is how many fields (maximal irrigated farmland from a single water block) fit between them. The figure at the top right of the linked article section makes this intuitive.

So in short and to draw the arc back to our problem, for any real number p >= 1, we get an aggregation mechanism for augments as:

  • Lp: T = (I_1^p + I_2^p + …)^(1/p)

If you plug in p=1, you get the "adding" case, and if you let p go towards infinity, you get the max term and thus essentially replacing. Now, between 1 and infinity, there are a lot of real numbers to choose from. Let's take a closer look at how the choice of p would affect the gameplay experience.
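A quick sketch of the two limiting cases (the effect values are made up):

```python
def lp_aggregate(effects, p):
    # Lp aggregation: T = (I_1^p + I_2^p + ...)^(1/p)
    return sum(i ** p for i in effects) ** (1 / p)

effects = [70, 75, 80]

# p = 1 reproduces plain adding:
print(lp_aggregate(effects, 1))  # 225.0

# a very large p approaches the max, i.e. essentially replacing:
print(round(lp_aggregate(effects, 100), 3))
```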

Within the bracket, the bigger p is, the more the game rewards you for pushing the gem level higher, even if those rifts take longer to complete. If p is too low, the optimal gem level ends up rather low: you make no concessions from your speed build towards pushing, and instead just jump to leveling the next gem as far as is convenient. This is bad because it disincentivizes pushing, the very aspect that makes the augment system interesting. On the other hand, if you make p way too large, the contribution of augments smaller than the highest single one gradually fades into irrelevance.


Outside of the bracket, the bigger p is the smaller this exponent 1/p is, which "flattens the curve" (high five to my 2020 boys) of how much power you accumulate with total playtime. For any fixed value of p and current build strength, there will be some constant optimal gem level, and it will take some constant amount of time to get a new gem on it. So the total term inside the bracket will grow at a constant rate (i.e., linearly, to the power of one – regardless of p). The outside exponent 1/p is responsible for slowing this growth down.
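A sketch of that growth claim, with an invented rate of one fixed-effect gem per unit of playtime:

```python
def total_power(t, p, rate=1.0):
    # if the bracketed sum grows linearly in playtime t, total power
    # is (rate * t)^(1/p); the rate constant here is made up
    return (rate * t) ** (1 / p)

# Doubling your total power requires 2^p times the playtime:
p = 4
t1 = 100.0
t2 = t1 * 2 ** p  # 1600.0
print(round(total_power(t2, p) / total_power(t1, p), 6))
```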

We established earlier that this exponent should be smaller than 1/3 so as not to compete with paragon in the long run. This makes 3 a reasonable lower bound for the new parameter p.

And there we have it! For any p>3, the Lp-norm is a compromise between replacing and adding. The slider can be freely adjusted to get the right balance between noticeable growth and reasonable limitations.

Let's look at some numerical examples to get a feel for it:

combination    p=4   p=5   p=8
70, 75          86    83    79
70, 75, 80      99    94    87
2 * 100        119   115   109
100 * 100      316   251   178

Multiply by 5 to get the main stat. So these are effectively the level a single gem would need to have to yield the same effect as the listed combination.
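The table can be reproduced directly from the Lp formula. A small verification sketch (here "100 * 100" means one hundred gems of level 100):

```python
def lp_aggregate(effects, p):
    # Lp aggregation: T = (I_1^p + I_2^p + ...)^(1/p)
    return sum(i ** p for i in effects) ** (1 / p)

rows = {
    "70, 75":     [70, 75],
    "70, 75, 80": [70, 75, 80],
    "2 * 100":    [100] * 2,
    "100 * 100":  [100] * 100,
}

for label, effects in rows.items():
    print(label, [round(lp_aggregate(effects, p)) for p in (4, 5, 8)])
```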

I don't know about you, but I kind of dig this. I'm very open on what the value of p should be; that needs playtesting. But something less than infinity, allowing some mild form of stacking, sure looks tempting.

Is it technically feasible? Yes, we can implement this without storing the complete set of past augments. The incremental definition looks like this:

T_(n+1) = (T_n^p + I_(n+1)^p)^(1/p).

Constant memory and time demand for this operation.
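A sketch of the incremental update, checked against aggregating the whole history at once:

```python
def lp_add(total, new_effect, p):
    # fold one more augment into the running total T:
    # T_(n+1) = (T_n^p + I_(n+1)^p)^(1/p)
    return (total ** p + new_effect ** p) ** (1 / p)

p = 4
total = 0.0
for effect in [70, 75, 80]:
    total = lp_add(total, effect, p)

# matches the batch formula over the full list of past augments:
batch = (70 ** p + 75 ** p + 80 ** p) ** (1 / p)
print(round(total, 6), round(batch, 6))
```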

One more worry: does this mean that once I've invested a lot into an item's augments, it's no longer worth switching to a piece of gear that is better by itself? Rephrased: for any given factor A > 1 by which the new item is better, does there exist an already-invested time amount X such that no time amount Y spent augmenting the new item would ever break even?

A · Y^(1/p) = (X + Y)^(1/p)    | ÷ Y^(1/p)

A = (1 + X/Y)^(1/p)    | ^p

A^p = 1 + X/Y    | −1

A^p − 1 = X/Y    | · Y

Y · (A^p − 1) = X    | ÷ (A^p − 1)

Y = X / (A^p − 1)

So no, for every A>1, p>0 and X, there does indeed exist an overtaking time Y, so switching will always still be worth it in the long run in such an aggregation system. Of course, you wouldn't immediately equip the new item, only once you've sufficiently stacked augments on it to actually break even.
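A numeric sanity check on the break-even formula, with invented values for A, X and p:

```python
def break_even_time(A, X, p):
    # time Y on the new item until A * Y^(1/p) equals (X + Y)^(1/p)
    return X / (A ** p - 1)

# assumed example: new item 10% better (A = 1.1), 500 hours already
# invested in the old one (X = 500), p = 4
A, X, p = 1.1, 500.0, 4
Y = break_even_time(A, X, p)
print(round(Y, 1))

# both sides of the break-even equation agree at that Y:
print(round(A * Y ** (1 / p), 4), round((X + Y) ** (1 / p), 4))
```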

That's it, I don't see anything else wrong with it myself anymore. What do you think? Would you like such a system? Or is there an argument I'm overlooking for why strictly replacing is better than even the mildest such form of stacking? I'd love to read everyone's thoughts on this!

TLDR: there's a smooth spectrum of possible mild forms of stacking between the current system, where augments only replace each other, and radically overpowered simple additive stacking, which would break the game.


