Talk:Analytical versus Pragmatic Complexity Reduction in Software Engineering

From BITPlan cr Wiki
For my PhD thesis I would like to discuss analytical versus pragmatic complexity reduction in software engineering. When bridging from user needs to delivered software, we need to find out which entities our system should work on. We can do this in a top-down fashion via project definition, requirements engineering and modelling, e.g. as OOA, OOD and OOI, and then check the created artifacts for conformance with the needs. More often than not this process creates accidental complexity, e.g. by trying to fulfill corner-case needs of minor relevance that are introduced by non-Pareto-conforming use case scenarios and are typical of human social interaction in the realm of software creation.

We can also apply bottom-up approaches, such as a systems analysis starting e.g. with the database structures and content, or process mining. Recent developments in AI and LLMs promise a better cost/benefit ratio and the option to combine top-down and bottom-up approaches more effectively. By e.g. considering N:M relations harmful when they are only needed for corner cases, properly matching the top-down analysis results against the available elements, and using a knowledge graph of artifacts to do so, we might be able to apply simple relevance and decision rules. E.g. consider an element relevant only if it is needed in 80% (4 out of 5 - Pareto level 1), 96% (24 out of 25 - Pareto level 2) or 99.2% (124 out of 125 - Pareto level 3) of the cases, or apply the same thresholds to availability - thus using long-tail distribution principles to get a proper effort/benefit integration of the cases at hand.

Give me a MediaWiki markup page outlining the ideas above.
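The Pareto levels above follow a single pattern: level n accepts an element only if it is needed in at least 1 − 5⁻ⁿ of the cases (4/5, 24/25, 124/125, …). A minimal Python sketch of such a relevance rule might look like this; the function names `pareto_threshold` and `is_relevant` are illustrative, not part of any existing tooling:

```python
def pareto_threshold(level: int) -> float:
    """Relevance threshold for a given Pareto level.

    Level 1: 4/5     = 80%
    Level 2: 24/25   = 96%
    Level 3: 124/125 = 99.2%
    i.e. threshold = 1 - 5**-level
    """
    return 1 - 5 ** -level


def is_relevant(needed_in: int, total_cases: int, level: int = 1) -> bool:
    """An element counts as relevant at the given Pareto level if it is
    needed in at least the threshold fraction of the cases at hand."""
    return needed_in / total_cases >= pareto_threshold(level)
```

The same predicate could be applied to availability instead of need, as suggested above, by feeding it the number of cases in which the element is available.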