>>56472
>>56447
>>56501
But let's imagine I made the tentacles into an array:
tentacles = [];
for (let i = 0; i < 8; i++) {
    tentacles.push(new Tentacle());
}
Now I still have 8 tentacles, but they're all in one list together, which means when I need to update all of them at once, I can do this:
for (const t of tentacles) {
    t.holding = 'none';
}
And best of all, this scales - in terms of maintenance - forever. With the old way, the amount of work/code I have to do increases every time I add more tentacles; with the new way I just change my "number of tentacles to create" variable (8 in this case) to something else. I can even make it a configuration variable, or a member variable, or some shit, so that it can be dynamically computed or loaded from a config file, or at least so that if I need to reference that value in multiple places it's always the same.
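To make that concrete, here's a rough sketch of the config-variable version. The names (`config`, `tentacleCount`, the `Tentacle` class body) are made up for illustration - the point is just that the count lives in one place:

```javascript
// Hypothetical config object - could just as easily be loaded from a file.
const config = { tentacleCount: 8 };

class Tentacle {
  constructor() {
    this.holding = null; // null = not holding anything
  }
}

// Creation loop reads the count from config, nothing hardcoded.
const tentacles = [];
for (let i = 0; i < config.tentacleCount; i++) {
  tentacles.push(new Tentacle());
}

// And the update loop doesn't care how many there are.
for (const t of tentacles) {
  t.holding = null;
}
```

Change `tentacleCount` to 80 and neither loop changes at all.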
There's some other concerns here - for example, tentacle.holding should be null or undefined instead of "none" when it's not holding anything ... but that's outside the scope of the discussion.
Now, strictly speaking, the FIRST approach I did is actually the most "efficient" - all that object indexing and dereferencing actually does have an overhead, and the internal object structures are not free. But you'll often find that what is "efficient" in computing terms is not efficient for a human being, and vice versa. There are times where increasing one increases the other, but it's NOT a given.
However, all the increased utility and safety (since less programmer work == fewer mistakes) more than offset the very modest increase in resource allocation. In fact, aside from having a general awareness of the scaling mechanics of the data and logic one is working with, performance ought to be the last consideration. Making things easy to fix, and easy to understand, is far more important, because then when you realize your shit sucks and it's too slow, you can go back and unfuck it much more easily than if it's a tangled mess.
This is especially important because 99% of the time you won't know the exact performance characteristics of your shit before it goes into production. So it's better to develop a latent sense of what kind of code/structuring is going to have favorable or unfavorable performance characteristics (a good programmer can profile shit he's writing in his head, as he writes it), but this should be in the "loose guideline" area, while making sure your crap isn't a mess should be an iron rule you only break if there is no other choice and there is a good reason.
So if you do it that way, once you actually have useful data on which to base your optimization, you can do it easily and efficiently, instead of fucking up your shit by trying to optimize it before you know what needs to be optimized, and then when you're inevitably wrong having to fight your own code to fix the mess you created.
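"Useful data" can be as simple as timing the loop once you have realistic quantities. A minimal sketch using the standard `console.time`/`console.timeEnd` pair (the 100000 count is an arbitrary stand-in for real production load):

```javascript
class Tentacle {
  constructor() { this.holding = null; }
}

// Build a big pile of tentacles to approximate real load.
const tentacles = [];
for (let i = 0; i < 100000; i++) {
  tentacles.push(new Tentacle());
}

// Measure the actual cost of the update loop instead of guessing.
console.time('update-all');
for (const t of tentacles) {
  t.holding = null;
}
console.timeEnd('update-all'); // prints something like "update-all: 2.1ms"
```

If the measured number is fine, you're done; if not, now you know exactly which loop to unfuck.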