I spent quite a bit of time refactoring the transition code for objects with a certain number of uses.
That code now works fine even if both actor and target have a certain number of uses (using a mallet on a chisel, for example, correctly decrements the use count on both the mallet and the chisel).
But the underlying implementation still makes objects with 100s or 1000s of uses intractable. Two 100-use objects that interact would result in 10,000 auto-generated transitions.
Someone pointed out recently that the point of item decay was to complicate the process of long-term, trans-generational survival, not to burden individual lives with repetitive busy-work. If you make a tool, you should probably be able to use it for the rest of your life without making another. But the existence of your tool shouldn't exempt the village from making that tool forever. The forge shouldn't lay dormant, but you shouldn't have to keep going back to the forge yourself, during your life.
This means that tools need to last 100s of uses. The other option is to have them decay over time like baskets, regardless of use, but that feels weird. A tool on the ground suddenly breaks? And it also motivates people to use the tool like crazy while they can. Chop as many trees as you can, there's only 5 minutes left before this ax is toast.
The moment where a tool finally breaks from extended use is a great moment, and it feels right that it breaks during a "chop." I don't want it suddenly breaking while you're simply carrying it.
This could be accomplished with a purely random breakage system, but that also feels weird, because then a brand new tool can sometimes break on its very first use.
The solution, without completely reimplementing the underlying transition engine, is a hybrid approach.
Each tool can now have a small number of true useCount states, along with a probability of transitioning between those use states.
So, an ax might have 4 uses, with a 0.1 chance of transitioning each time it is used, giving it 40 chops on average, but 4 chops in the absolute worst case. If the probability were instead 0.01, the ax would average 400 uses, and 0.001 would give it 4,000. All without introducing any additional complexity blow-up in the underlying, auto-generated transitions.
A berry bush has 6 uses, with a 1.0 use chance, meaning that it is decremented every time it is used. But I could also give it a 0.75 chance, meaning that 25% of the time, you get a free berry when you pick one.
The use engine already supports sprites that appear or vanish with use (like berries on the bush or cracks on a tool). For tool usage, those cracks can also coincide with the true state transitions. A 4-use ax can get a crack each time it gets closer to breaking, for example, but the number of uses between cracks appearing is uncertain.
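In rough pseudocode, the per-use check looks something like this (a minimal sketch of the idea as described above, not the actual game code; the function names are made up for illustration):

```python
import random

def use_object(states_left, transition_prob):
    """One use of a hybrid-decay object.

    states_left:     how many true useCount states remain (4 for a fresh ax)
    transition_prob: chance that this use consumes one state (0.1 for the ax)
    Returns the new states_left; 0 means the object just broke.
    """
    if random.random() < transition_prob:
        states_left -= 1
    return states_left

def uses_until_broken(states=4, prob=0.1):
    """Simulate one tool's whole lifetime and count its total uses."""
    uses = 0
    while states > 0:
        uses += 1
        states = use_object(states, prob)
    return uses

# 4 states at 0.1 per use: ~40 uses on average, never fewer than 4.
samples = [uses_until_broken() for _ in range(100000)]
print(sum(samples) / len(samples))   # ~40
print(min(samples))                  # always >= 4
```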
Offline
Thank you! Much appreciated. I guess the rare axe that gets 4 uses had flaws smithed into the steel lol
Be kind, generous, and work together my potatoes.
Offline
It's a little unclear which solution you decided to implement. I can't say I like the idea of it being random, and the way you explained it makes it sound like it's only random because it's easier to code, not because it's better game design.
Offline
Looks like he chose the one I was writing about, so you can have another look at that. It’s not a bad solution at all. The way I read Jason’s explanation, he is saying that both fully random and fully determined are worse solutions than a combination of the two. I for one agree with that. For one thing, it’s more realistic. Let’s try it out and see if it feels good. Otherwise I’m sure there will be new updates.
Offline
When I read the thread yesterday, I also thought that the combination of randomness and different sprites was the best solution. No need to change the whole engine, no big mess, no tracking, just modify what already exists: fast implementation.
Then again, I wonder, for more advanced objects, what is the absolute limit on the minimum number of uses you could implement before it gets messy again?
A better tool, or a car for example, that breaks after 4 uses would be insane.
But you could get around this problem if you make repairs easier. I think that's what Gus also wrote yesterday?
I just wonder what your thoughts are about this?
It's a rough world - keep dying until you live <3
Offline
That was the idea I suggested in his other thread. The problem was that the way he was doing it wasn't very effective code-wise and would have been a pain to fix. Doing it completely randomly was really easy and efficient, but there was a chance of a tool breaking on the first use, which would really suck. So the hybrid method negates the worst-case scenario and gives the tool a minimum number of uses, while remaining easy and efficient to code into the game.
Offline
My hammer was frail because I was the chick blacksmith at the tiny camp.
I made axes, shovels and chisels.
And when I tried to make a blank into a file, my hammer broke. * Clink *
I was left with an adze that never grew into a file.
I learned a lesson but I was 60 years old.
Last edited by JS (2018-04-28 02:37:16)
Offline
Well, the hybrid approach is even better in terms of worst cases, because of the magic of independent events.
If an ax has a 0.01 chance of failing, you expect it to last 100 chops. But out of 100 axes, you expect one to fail on the first chop.
If you split the chops up into four batches and have a 1/25 chance of transitioning to the next batch, you still expect 100 chops on average before failure, but you only expect a 4-chop ax once in about 400,000 axes. It would have to fail on the "first" chop four times in a row, which has a (1/25)^4 chance of happening.
And little realizations like this are the kind of stuff that I live for as a programmer and game designer.
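Putting numbers on that worst case (plain arithmetic, nothing game-specific):

```python
# Worst case for the 4-batch ax: a transition on each of the first 4 chops.
p = 1 / 25
print(p ** 4)          # 2.56e-06
print(1 / p ** 4)      # 390625.0, i.e. roughly 1 in 400,000 axes break at 4 chops
# Versus the single-batch 1/100 ax, where 1 in 100 breaks on the very first chop.
print(1 / (1 / 100))   # 100.0
```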
Offline
jasonrohrer wrote: If an ax has a 0.01 chance of failing, you expect it to last 100 chops.
That's actually not the correct math for an axe with a 0.01 chance of failing; I would say it's wrong to expect 100 chops out of it.
On average you can expect 100 chops - that's true, but averages aren't a great metric for a single use case.
Out of 100 chops, the probability of none of them failing would be 0.99^100 - that's approx 36.6%.
So the probability of at least one (one or more) chop failing would be approx 1 - 36.6%, so that's 63.4%.
With a chance of failure of 63.4% I wouldn't really be counting on my axe actually lasting 100 chops, the opposite is more probable.
If I wanted an axe that lasts at least 100 or more chops, with 0.01 failure chance per chop, I'd probably make multiple axes...
http://www.pstcc.edu/facstaff/jwlamb/Ma … sch4.5.pdf
P.S. Otherwise I think the idea is great, just wanted to clarify that averages may be misleading.
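For anyone who wants to check those percentages, it's a two-line calculation (plain arithmetic, not anything from the game code):

```python
p_fail = 0.01
survive_100 = (1 - p_fail) ** 100   # chance the axe survives all of its first 100 chops
print(survive_100)                  # ~0.366 -> about 36.6% of axes make it to 100
print(1 - survive_100)              # ~0.634 -> about 63.4% break before chop 100
```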
Last edited by KucheKlizma (2018-04-28 06:56:32)
Offline
jasonrohrer wrote: The solution, without completely reimplementing the underlying transition engine, is a hybrid approach.
I did not read the source code, I have no idea how your game works, and I am not the perfect programmer.
But this sounds like a dangerous problem to me.
Whenever I decided not to rewrite code in the past (because it's super annoying and takes a lot of time), I regretted it and ended up rewriting the code anyway.
But that meant investing a lot of time in dealing with a bad system, time I could have saved if I had rewritten it immediately.
For me personally it's no problem to wait several months for the next update (but I know a lot of people see this differently).
Offline
This solution is brilliant in its simplicity and embraces the best of the current system and RNG. I don't think you need to panic about Jason paving over cracks and having to refactor everything at increased cost later. This change is miniscule in terms of code change, but adds a whole lot of depth to the game. It's almost cheating, really. But worst case, if he has to come up with a better system later, it won't be any harder to refactor anything.
Edit: TMPL (today my phone learned) refactor is a word.
Last edited by Uncle Gus (2018-04-28 09:03:40)
Offline
Uncle Gus wrote: This solution is brilliant in its simplicity and embraces the best of the current system and RNG. I don't think you need to panic about Jason paving over cracks and having to refactor everything at increased cost later. This change is miniscule in terms of code change, but adds a whole lot of depth to the game. It's almost cheating, really. But worst case, if he has to come up with a better system later, it won't be any harder to refactor anything.
From his initial explanation I was actually more worried about whether it's efficient, or whether it could throttle performance to have a bunch of "dummy objects" generated as explained, rather than about the functionality or structure. Does the entire object have to be loaded into memory multiple times? But then again, what matters more is how it looks post-compile, as the compiler DGAF most of the time anyway.
I wish I weren't a complete amateur and could just check it myself; I'll prolly give it a prod at some point anyway, just for fun.
Offline
jasonrohrer wrote: If an ax has a 0.01 chance of failing, you expect it to last 100 chops.
KucheKlizma wrote: That's actually not the correct math for an axe with a 0.01 chance of failing; I would say it's wrong to expect 100 chops out of it.
I was using "expect" in the mathematical sense, as in "expected value."
Which, for an ax with a 0.01 chance of failing on each chop, is exactly 100 chops.
If you roll a d20 die, and you keep rolling until it rolls the number 13, you expect to roll it 20 times before the 13 comes up.
Now, in terms of it matching our intuitive understanding of what we "expect" the ax to do.... it's in the right ballpark, but skewed: only about 37% of the axes will last MORE than 100 chops, and the median ax breaks somewhere around chop 69.
I think what you're getting at is that with a geometric probability mass function, so much more of the "weight" is below the expected value of 100. We can experiment with this tool:
https://homepage.divms.uiowa.edu/~mbogn … /geo1.html
The distribution has a very long, thin tail above 100, one that never quite reaches 0 no matter how many chops you go out. That tail balances a very thick head below 100 chops.
This means that any particular number of chops below 100 is way more likely than any particular number of chops above 100. For example, 50 chops is roughly 3x as likely as 150 chops.
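Both claims fall straight out of the geometric pmf; here's a quick check (again, just math, no game code):

```python
p = 0.01

def pmf(k):
    """P(the ax breaks exactly on chop k), geometric distribution."""
    return (1 - p) ** (k - 1) * p

# Expected value: sum of k * P(k); truncating at 100,000 chops loses essentially nothing.
print(sum(k * pmf(k) for k in range(1, 100001)))   # ~100.0
# How much more likely is breaking exactly on chop 50 than exactly on chop 150?
print(pmf(50) / pmf(150))                          # ~2.73, i.e. roughly 3x
```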
Offline
The decay seems much better so far. Still have to see... the future.
I got huge ballz.
Offline
What you said is absolutely correct; I was using "expect" in the non-mathematical sense, for when you have a single axe in hand. Practical application.
Anyway, this is already addressed by splitting the chances across 4 batches of chops, so I'm kinda going off on a tangent here...
With a single axe, the probability of getting 100 uses out of it is still working against you, and it's unrealistic to expect exactly 100 uses.
But if you target 67-68 chops, for example, you'll be close to a 50/50 chance, even though the average is 100 chops.
It's similar with a coin flip. If you flip the coin once, you can't realistically expect the average outcome of 50% tails and 50% heads; the coin has to land on one side, and either result is away from the average.
If you have two coin flips and the first already landed heads, you again can't count on the second landing tails to even things out. It still has the same 50% chance of landing on either side, so it's entirely possible it lands heads too.
The probability of getting the exactly-average outcome from 1 coin flip is 0% - impossible.
The probability of getting the exactly-average outcome (one heads, one tails) from 2 coin flips is 50%.
Basically all I'm getting at is that averages work best when they have room to work. In a single random roll, the full spectrum of outcomes is on the table, and the average is not necessarily the most probable one; it's more likely you'll land somewhere off the average. Which is why a lot of people get really angry at RNG: they expect an immediate average outcome, and that's not how averages work.
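One concrete way to see that for the single-batch axe: the mean lifetime is 100 chops, but the median is only about 69, so a lone axe is more likely to fall short of the average than to beat it (a small check using the standard geometric formulas, not anything from the game):

```python
import math

p = 0.01
mean = 1 / p                                 # 100 chops on average
median = math.ceil(-1 / math.log2(1 - p))    # smallest k with P(break by chop k) >= 0.5
print(mean)                      # 100.0
print(median)                    # 69
print(1 - (1 - p) ** median)     # ~0.50 -> half of all axes are gone by chop 69
```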
Offline
Looking at this further, each "batch" of uses is an independent geometric random variable. If p is the chance of moving on to the next batch with each use, we expect 1/p uses before moving on to the next batch. Our variance is (1 - p) / (p^2). And that is where the magic happens.
For independent random variables, the expectation of the sum of the variables is the sum of each variable's expectation. Same with the variance.
We can achieve the same expected value of "100 chops" with a single batch with 1/100 chance of failure, four batches with 1/25 failure, or ten batches with 1/10 failure. They all have the same expected value, due to the summing of the expected value of each batch. p gets bigger and bigger with the number of batches, but the expected value of the total sum does not change.
However, because the variance includes a p^2 term in the denominator, as p gets bigger, the sum of the independent variances shrinks.
For p = 1/100, Var = 9,900 (std deviation = 99.498).
For four batches of p = 1/25, each batch has Var = 600, for a sum Var over all four batches of 2,400 (sum std deviation = 48.989).
For 10 batches of p = 1/10, each batch has Var = 90, for a sum Var over all 10 batches of 900 (sum std deviation = 30).
In the most extreme case, we could have 100 batches of p = 1/1, where our expected value is still 100, but our Var shrinks to 0.
I recently shipped the iron mine with a single batch and a 1/10 chance of failure. It was supposed to feel like a gamble, but an expected value of 10 sounded like a lot of iron. But with a variance of 90 (std deviation = 9.48), we can see that we expect a very wide range. I'm fixing the iron mine to have more batches now.
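All of those numbers come straight from E = 1/p and Var = (1 - p) / p^2 per batch, summed over independent batches; here's the arithmetic spelled out, with the iron mine's single 1/10 batch as the last row (the helper name is just for illustration):

```python
import math

def batch_stats(num_batches, p):
    """Expected uses, variance, and std deviation for num_batches independent
    geometric batches, each with per-use transition chance p."""
    expected = num_batches / p
    variance = num_batches * (1 - p) / (p ** 2)
    return expected, variance, math.sqrt(variance)

for batches, p in [(1, 1/100), (4, 1/25), (10, 1/10), (100, 1.0), (1, 1/10)]:
    e, var, sd = batch_stats(batches, p)
    print(f"{batches:3d} batch(es) at p = {p:.2f}:  E = {e:6.1f}  Var = {var:7.1f}  std = {sd:6.2f}")

# Output:
#   1 batch(es) at p = 0.01:  E =  100.0  Var =  9900.0  std =  99.50
#   4 batch(es) at p = 0.04:  E =  100.0  Var =  2400.0  std =  48.99
#  10 batch(es) at p = 0.10:  E =  100.0  Var =   900.0  std =  30.00
# 100 batch(es) at p = 1.00:  E =  100.0  Var =     0.0  std =   0.00
#   1 batch(es) at p = 0.10:  E =   10.0  Var =    90.0  std =   9.49   (the iron mine)
```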
Offline