
Errors while installing reduced reputation increase


onerous


SCS has components that trade install time for better in-game speed. Case in point: the "Smarter mages" component of SCSII generates about 90 mage scripts, averaging 10,000+ lines each. Compiling those scripts SSL->BAF and BAF->BCS takes quite a while (about 10 minutes on my relatively fast computer). I could get that 10 minutes down to 20 seconds by having only one mage script for all mages (that's how Tactics does it, and it's how earlier versions of SCSII did it). But the benefit of spending the 10 minutes is that the in-game mage scripts are much shorter (usually 30%-60% as long as they would be with the old method), and that leads to a noticeable decrease in in-game lag.
Increasing install processing time (within a "non-ridiculous margin" at least - however you wish to interpret that) is wholly acceptable if it decreases runtime processing. Indeed, I think I was amongst the first to suggest the ends, though probably not the means (as a result of my SCS testing, like, what, 6 years ago now :)).
I'm interested in general principles of mod design, partly because I have a philosopher's habit of being more interested in the strengths and weaknesses of the arguments for a conclusion than in the conclusion itself.
As I suggested above, I'm probably less interested in "coding philosophies" than in code that works efficiently. And that can take many forms depending on individual circumstances rather than some overarching set of beliefs. However, if I do have some sort of "philosophy" it's essentially lean software development.
Goals of lean software development

* Improve quality: Quality is both a goal in itself and a result of the other goals.

* Eliminate waste: Waste is any activity that consumes time, resources or space but does not add any value to the product:

  - Transport (moving resources not actually required to perform the processing)

  - Inventory (all components, work in process and resources not being processed)

  - Motion (resources moving more than required to perform the processing)

  - Waiting (waiting for the next processing step)

  - Overproduction (producing output ahead or in excess of demand)

  - Overprocessing (unneeded activity resulting from poor tool design)

  - Defects (the effort involved in inspecting for and fixing defects)

* Reduce time: Reducing the time it takes to finish an activity from start to finish is one of the most effective ways to eliminate waste and lower maintenance effort.

* Reduce maintenance: To minimise maintenance, output and process only what is necessary. Overproduction and overprocessing increase maintenance due to augmented processing and storage requirements (transport and inventory).

My interpretation is probably closer to lean manufacturing, which is where the concept came from originally. Now there are all sorts of development philosophies, and I'm not saying this one is above criticism, nor is it a set of immutable laws. Yet I probably keep these guidelines in mind during development (at least subconsciously) unless there's a good reason to adopt a different approach (and indeed, I probably break these "rules" quite often). I also realise modding is a hobby rather than a large commercial project, but it can still benefit from the same methods. I'm not saying you disregard (all) of these points either, and I hold your mods in high regard. But maybe this helps you see where I'm coming from. I'm not out on a limb somewhere, as this is a fairly common methodology.

So putting what I've been saying in that framework, coding elegance and logical clarity are valuable because (a) they reduce time; (b) they reduce maintenance (of the code); (c) they reduce "the effort involved in inspecting for and fixing defects".


I never said I wasn't in favour of code clarity (though that is often relative to the coder, particularly in WeiDU). It's just that sometimes the other goals outweigh that (such as 'transporting' or in the above case decompiling/recompiling 'resources not actually required to perform the processing'). (Also in the above case, I don't think the revised code is overwhelmingly less clear than the original code.)

 

Also, you're considering 'reducing time' a goal pertaining to writing the code. It also pertains to processing the code (both are valid aspects of that goal).

I never said I wasn't in favour of code clarity (though that is often relative to the coder, particularly in WeiDU).

Sure, but SCS is a single-authored project, so my definition is fairly crucial here.

 

It's just that sometimes the other goals outweigh that (such as 'transporting' or in the above case decompiling/recompiling 'resources not actually required to perform the processing').

Agreed in principle. But to reiterate, SCS install time is dominated by other factors anyway.

 

(Also in the above case, I don't think the revised code is overwhelmingly less clear than the original code.)

Agreed.

 

Also, you're considering 'reducing time' a goal pertaining to writing the code. It also pertains to processing the code (both are valid aspects of that goal).

Oh, ok, that's a semantic ambiguity. I think they're very different goals, actually, albeit both are relevant, but I lean heavily towards the former in my own internal calculations.

Sure, but SCS is a single-authored project, so my definition is fairly crucial here.
I'm not sure I caught your actual definition of what "code clarity" is, though there may have been an oblique example or two above (but as you know, an example does not equal a definition). If it's primarily code that is readable to the coder, and there's only one coder, even that is relative. I imagine that five or ten years ago you wouldn't have been able to read your current code, so it's just a matter of gaining familiarity with the syntax.
Also, you're considering 'reducing time' a goal pertaining to writing the code. It also pertains to processing the code (both are valid aspects of that goal).
Oh, ok, that's a semantic ambiguity. I think they're very different goals, actually, albeit both are relevant, but I lean heavily towards the former in my own internal calculations.
Different goals perhaps, but closely related, yes. The same development methodology can apply to:

a) the efficiency of the coder

b) the efficiency of the code

Sure, but SCS is a single-authored project, so my definition is fairly crucial here.
I'm not sure I caught your actual definition of what "code clarity" is, though there may have been an oblique example or two above (but as you know, an example does not equal a definition).

I don't have a definition to offer; I'm piggybacking off the notions of elegance and logical clarity that apply in mathematical reasoning.

 

Also, you're considering 'reducing time' a goal pertaining to writing the code. It also pertains to processing the code (both are valid aspects of that goal).
Oh, ok, that's a semantic ambiguity. I think they're very different goals, actually, albeit both are relevant, but I lean heavily towards the former in my own internal calculations.
Different goals perhaps, but closely related, yes. The same development methodology can apply to:

a) the efficiency of the coder

b) the efficiency of the code

 

I'm unpersuaded. It's dead easy to come up with tasks where some long complicated bit of code runs quicker than some much shorter cleaner piece.

It's dead easy to come up with tasks where some long complicated bit of code runs quicker than some much shorter cleaner piece.
If it's difficult to produce such code, then one has to analyse the payoffs of coder efficiency vs. code efficiency. But if (as you say) it's "dead easy" then what's the problem?

 

If the issue is with "code clarity" then see my previous comment. As long as the coder can read the code, there wouldn't appear (to me) to be an issue. (Again, a relative issue: sometimes I have trouble reading three lines of simple code that I wrote vs. dozens of lines of fairly complex code that I wrote. Usually it has to do with poor code commenting on my part.)


This is a trivial implementation of string copy in C++:

void copyString(char *to, char *from, int count)
{
  do {                      /* count > 0 assumed */
    *to++ = *from++;
  } while (--count > 0);
}

 

This is the same operation implemented using Duff's device:

void copyString(char *to, char *from, int count)
{
  int n = (count + 7) / 8;
  switch (count % 8) {
  case 0: do { *to++ = *from++;
  case 7:      *to++ = *from++;
  case 6:      *to++ = *from++;
  case 5:      *to++ = *from++;
  case 4:      *to++ = *from++;
  case 3:      *to++ = *from++;
  case 2:      *to++ = *from++;
  case 1:      *to++ = *from++;
          } while (--n > 0);
  }
}

Depending on a number of conditions, copy-pasting the body of the latter function could be 4x faster than calling the former; yet, unless that yields a relevant speed increase in the total processing time, using the faster version costs more in code clarity than it gains in code speed.

 

Moreover, starting from about BG2-era processors and compilers, the former version actually became faster than the latter, so you ended up with developers blindly pasting in slower, less clear code (if you want a BG2 equivalent, it's using ACTION_FOR_EACH on 4000-some CRE files because "regex is slow").
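
To make the BG2 analogy concrete, the two approaches look roughly like this - a hedged sketch only, with placeholder creature names, a placeholder patch and offsets that assume a BG2-style CRE v1:

// Explicit list: fast, but the list has to be maintained by hand
ACTION_FOR_EACH cre IN ~mage01~ ~mage02~ ~mage03~ BEGIN
  COPY_EXISTING ~%cre%.cre~ ~override~
    WRITE_BYTE 0x52 (THIS - 1)   // placeholder patch: improve THAC0 by 1
  BUT_ONLY
END

// Regexp pass: slower, but it states its own criteria and catches mod-added files
COPY_EXISTING_REGEXP GLOB ~.+\.cre~ ~override~
  PATCH_IF (SOURCE_SIZE > 0x2d3) BEGIN            // skip malformed CREs
    READ_BYTE 0x273 class
    PATCH_IF (class = IDS_OF_SYMBOL (~class~ ~MAGE~)) BEGIN
      WRITE_BYTE 0x52 (THIS - 1)
    END
  END
BUT_ONLY

The first version finishes almost instantly but silently misses any mage a later mod adds; the second pays the cost of opening every CRE in the game.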

developers blindly pasting in slower, less clear code (if you want a BG2 equivalent, it's using ACTION_FOR_EACH on 4000-some CRE files because "regex is slow").
Oh, so now we've gone from using A_F_E on ~300 CREs vs. ~4000? There's quite a difference. If your goal is to patch ~10% of CREs in the game, there's no need to (attempt to) transport 100% of them. If, on the other hand, your goal is to patch a good majority of them, there's no need to specify such a precise list.

 

Moreover, you haven't convinced me that regexp is actually anything near as fast as A_F_E (or indeed, faster, as you seem to be suggesting - that would be a neat trick that would make A_F_E obsolete).

 

To give a concrete example, Infinity Animations uses A_F_E to make "surgical strikes", if you will, on the CREs whose animation it wants to change. Changing this to regexp would require looping through all CREs in the game (including all modded ones) for every animation ID. This would make the mod uninstallable. As it is, the mod's install is somewhat tolerable (the longest step in the process is probably downloading several gigabytes' worth of animations). However, I feel there is potential within WeiDU to improve the process nonetheless and avoid such a hardcoded list of CREs for each animation, but I doubt it would involve doing a blanket COPY_EXISTING_REGEXP GLOB for each CRE-animation loop (feel free to send me a revised tp2 with "banana-shaped" syntax though :)). As for code clarity, I don't think using C_E_R would make it any clearer than using A_F_E. On the contrary, the current A_F_E implementation shows you exactly which CREs it's patching, and there are reasons some specific CREs are excluded from the A_F_E loops that would take some pretty contrived (more complex and less clear) C_E_R syntax to exclude.
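
For readers who haven't opened the mod, the "surgical strike" pattern being described is roughly the following - a hedged sketch rather than Infinity Animations' actual code, with invented CRE names and an invented animation value:

// One explicit list per animation; only the listed CREs are touched
ACTION_FOR_EACH cre IN ~drow01~ ~drow02~ ~drow03~ BEGIN
  ACTION_IF FILE_EXISTS_IN_GAME ~%cre%.cre~ BEGIN
    COPY_EXISTING ~%cre%.cre~ ~override~
      WRITE_LONG 0x28 0x7f42     // hypothetical new animation slot (0x28 = animation ID)
    BUT_ONLY
  END
END

A regexp alternative would have to open every CRE in the game once per animation, which is exactly the per-animation scan cost just described.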


Using AFE because you know you must patch those 200 specific CREs, because they are in-game Svirfneblin rather than gnomes, is logical (you have no way to know whether a CRE file is a Svirfneblin or not).

Using AFE on 200 files because those 200 are the only known cleric/mages is borderline (it's noticeably faster than regexing through all CRE files, but you must update the list every time new mods come out).

Using AFE on ~8000 CRE files (aurora/lib/t-random.tpa) is plainly illogical: you save maybe 3 seconds over using C_E_R, you must rebuild the list every time new mods come out, and explicitly filtering out undesirable CRE files is clearer.

 

PS: try saving aurora/batchlog/* as inlined (the latest version has APPEND_OUTER -) and only writing the list on disk when closing the install.

Using AFE because you know you must patch those 200 specific CREs, because they are in-game Svirfneblin rather than gnomes, is logical (you have no way to know whether a CRE file is a Svirfneblin or not).
Right - it has to be hardcoded anyway.
Using AFE on 200 files because those 200 are the only known cleric/mages is borderline (it's noticeably faster than regexing through all CRE files, but you must update the list every time new mods come out).
Also possibly right, but again, there would be a C_E_R scan for each animation, which would be stupidly slow, unless you've done something godlike to regexp. Early on, I considered having just one C_E_R loop that goes through all CREs and patches their animations based on various requirements, but rejected it for whatever reasons. One being that each animation requires the animation files to be present, so it really would require a C_E_R for each animation, or a complete recode of the way the mod works (feel free to do that - the current implementation actually does work in most cases unless you have some obscure codepage, and we're working on that). But the mod is component-based anyway for good reason, so it would involve at least a C_E_R for each component (of which there are many).
Using AFE on ~8000 CRE files (aurora/lib/t-random.tpa) is plainly illogical: you save maybe 3 seconds over using C_E_R, you must rebuild the list every time new mods come out, and explicitly filtering out undesirable CRE files is clearer.
Quite possibly, but actually I don't care much if it doesn't account for CREs in newer mods. What it boils down to is that either you find some item on some CRE or you don't. As it is, the list could probably stand to be trimmed down quite a bit, as I don't much like the idea of adding even a single script block globally to (pretty much) all CREs. Believe it or not, I have seen slowdown in doing this, because the engine is apparently so stupid it needs time to "think" to figure out whether to assign our items randomly or not.

 

Having said that, it probably could stand to be improved a bit - I believe nearly all code (and documentation) could, and I have no real qualms about it (unlike some coders, who have a sort of "Thou Shalt Not Touch Working Code" philosophy). It works quickly, and the A_F_E is a single line of code (regardless of how many arguments we pass to it - personally I feel listing each of these on a separate line is stupid, but to each his own). So what exactly is your issue with it, apart from "philosophical" modding concerns? Clearly, you've spent some time analysing it :).

PS: try saving aurora/batchlog/* as inlined (the latest version has APPEND_OUTER -) and only writing the list on disk when closing the install.
I think I understand what you're saying but am not completely sure, so again, a better description of the problem (and a more verbose solution) would help. I suspect it would involve a non-trivial code change (though again, I'm not averse to doing this if there's a real payoff in doing so).
Also possibly right, but again, there would be a C_E_R scan for each animation, which would be stupidly slow, unless you've done something godlike to regexp.

I was talking in general - a mod that (for whatever reason) gives -2 to saves to Cleric/Mages, +1 thac0 to Fighter/Thieves and an extra item to Golems could use a single C_E_R with a tower of PATCH_IFs (or a PATCH_MATCH), to be able to account for mod creatures; this is noticeably slower than an AFE, but clearer to the coder and reader, and can consider newer mods.
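
A rough illustration of that single-pass shape - hedged: the offsets assume a BG2-style CRE v1, the item and slot passed to ADD_CRE_ITEM are placeholders, and RACE.IDS is assumed to contain a GOLEM entry:

COPY_EXISTING_REGEXP GLOB ~.+\.cre~ ~override~
  PATCH_IF (SOURCE_SIZE > 0x2d3) BEGIN                     // skip malformed CREs
    READ_BYTE 0x272 race
    READ_BYTE 0x273 class
    PATCH_IF (class = IDS_OF_SYMBOL (~class~ ~CLERIC_MAGE~)) BEGIN
      FOR (save = 0x54; save <= 0x58; save = save + 1) BEGIN   // the five saving throws
        WRITE_BYTE save (THIS - 2)                             // lower is better
      END
    END
    PATCH_IF (class = IDS_OF_SYMBOL (~class~ ~FIGHTER_THIEF~)) BEGIN
      WRITE_BYTE 0x52 (THIS - 1)                           // improve THAC0 by 1
    END
    PATCH_IF (race = IDS_OF_SYMBOL (~race~ ~GOLEM~)) BEGIN
      ADD_CRE_ITEM ~ring06~ #0 #0 #0 ~IDENTIFIED~ ~INV15~  // placeholder item and slot
    END
  END
BUT_ONLY

Mod-added cleric/mages, fighter/thieves and golems are picked up automatically, at the price of opening every CRE in the game once.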

Moreover, if you have ten components that must patch a set of creatures (and you feel that using C_E_R for every component is too slow), you could have the main component do a single C_E_R pass to build the list for each component (see: level1npcs).
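
In sketch form, within a single component (the array and variable names are made up; sharing the harvested list between separately installed components would additionally require writing it out somewhere, e.g. to a 2DA, which is omitted here):

// One harvesting pass: nothing is written, we only collect matching resrefs
COPY_EXISTING_REGEXP GLOB ~.+\.cre~ ~override~
  PATCH_IF (SOURCE_SIZE > 0x2d3) BEGIN
    READ_BYTE 0x273 class
    PATCH_IF (class = IDS_OF_SYMBOL (~class~ ~CLERIC_MAGE~)) BEGIN
      SET $cleric_mages(~%SOURCE_RES%~) = 1
    END
  END
BUT_ONLY

// Later passes loop over the harvested list instead of rescanning everything
ACTION_PHP_EACH cleric_mages AS cre => value BEGIN
  COPY_EXISTING ~%cre%.cre~ ~override~
    WRITE_BYTE 0x52 (THIS - 1)   // placeholder patch
  BUT_ONLY
END

The scan cost is paid once; each later pass only touches the files it actually needs.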

 

I think I understand what you're saying but am not completely sure, so again, a better description of the problem (and a more verbose solution) would help. I suspect it would involve a non-trivial code change (though again, I'm not averse to doing this if there's a real payoff in doing so).

For a typical CRE/ITM/... patching mod, most of the time is spent reading and writing to the disk; if you know you must APPEND to a given file a thousand times in a single component, it's faster to do so in an inlined file and write the resulting file on the disk only at the end of the component.
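
A hedged sketch of the batching idea, using a plain WeiDU variable as the in-memory buffer rather than the inlined-file route mentioned above (the loop, the log text and the batch.log filename are invented; only the aurora/batchlog directory comes from the earlier post):

OUTER_TEXT_SPRINT batch_log ~~                 // empty in-memory buffer

ACTION_FOR_EACH cre IN ~mage01~ ~mage02~ ~mage03~ BEGIN
  // ... patch %cre%.cre here ...
  OUTER_TEXT_SPRINT batch_log ~%batch_log%patched %cre%.cre%WNL%~
END

// One real disk write at the end of the component instead of one per file
APPEND_OUTER ~aurora/batchlog/batch.log~ ~%batch_log%~

One caveat, raised again below: a buffered log only reaches the disk at the end, so a partially completed install that gets rolled back leaves nothing behind to inspect.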

For a typical CRE/ITM/... patching mod, most of the time is spent reading and writing to the disk; if you know you must APPEND to a given file a thousand times in a single component, it's faster to do so in an inlined file and write the resulting file on the disk only at the end of the component.
Yes, I suppose so. But I've actually tested this on some primitive computers as well as some faster ones, and whether we log via APPEND or not doesn't actually seem to make any difference in install speed at all. Nevertheless, I could change it in some (umpteen * x) lines of code, or you could in fact change it so that APPEND_OUTER writes to memory rather than to disk except on the final write (the same way, I suppose, that someone suggested READ_2DA_ENTRIES_NOW/LATER and similar command variants).

 

Though of course, I would not suggest that if it breaks existing mods. Also, a caveat: if a WeiDU component fails to install (for whatever reasons, which can be many) it will reverse all its changes and writes. There are a lot of times I will monitor and copy an APPEND_OUTER log in progress to see what it's written so far and (in theory, if I catch it in time) where it fails. Writing values to a variable rather than a file won't catch this failure, unless there's some obscure debugging syntax I'm missing.

On the contrary, the current A_F_E implementation shows you exactly which CREs it's patching, and there are reasons some specific CREs are excluded from the A_F_E loops that would take some pretty contrived (more complex and less clear) C_E_R syntax to exclude.

I've been using arrays to exclude specific files from C_E_R patches that can't be weeded out normally. Checking a variable is presumably one of the fastest operations available, so I do something like this:

 

OUTER_SET $no_patch(~FILE1~) = 1
OUTER_SET $no_patch(~FILE2~) = 1

COPY_EXISTING_REGEXP GLOB ~.+\.itm~ ~override~
 PATCH_IF (NOT VARIABLE_IS_SET $no_patch(~%SOURCE_RES%~)) BEGIN
// patch away
 END
 BUT_ONLY

