@cstanhope Put everything in version control.
Unless you have a detailed understanding of what the code was originally intended to do and how it was to be used, you are in no position to judge the code as bad. It is merely inconvenient.
Popular refactoring strategies may not work, especially on procedural or scientific code. A "best practice" that can't be applied in your specific case isn't a best practice.
Many of the "bad" design decisions likely exist because the code's requirements were never updated and the code was designed and built around long-gone limitations - in hardware, tooling, operating systems, the state of the art, or the local state of knowledge.
Document and archive the build process, especially compiler flags. Recovering those from a lost build environment is painful.
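One lightweight way to archive that information is to snapshot it alongside the code. A minimal sketch, assuming a C-style toolchain; the compiler name, flag list, captured environment variables, and output filename here are all placeholders to adapt to your own build:

```python
import json
import os
import platform
import subprocess

def snapshot_build_env(compiler="gcc", flags=None, out_path="build_env.json"):
    """Record the toolchain and flags used for a build so they can be
    recovered later. Compiler name, flags, and output path are examples."""
    try:
        # First line of `<compiler> --version` identifies the toolchain.
        version = subprocess.run(
            [compiler, "--version"], capture_output=True, text=True
        ).stdout.splitlines()[0]
    except (FileNotFoundError, IndexError):
        version = "unknown"
    record = {
        "platform": platform.platform(),
        "compiler": compiler,
        "compiler_version": version,
        "flags": flags or [],
        # Capture only the environment variables that influence the build.
        "env": {k: v for k, v in os.environ.items()
                if k in ("CC", "CFLAGS", "LDFLAGS")},
    }
    with open(out_path, "w") as f:
        json.dump(record, f, indent=2)
    return record
```

Commit the resulting JSON next to the code; a future maintainer then gets the flags even if the original build machine is long gone.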
Focus on integral (whole-system) tests against whatever production, demo, or test cases exist.
Unit tests are a developer convenience; what you need for recovery and refactoring are acceptance tests.
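In practice such acceptance tests often take the form of "golden master" or characterization tests: run the existing code on known inputs, record the outputs, and lock them in. A minimal sketch; `legacy_mean` and the golden cases are stand-ins for whatever legacy routine and recorded production runs you actually have:

```python
# Stand-in for a legacy routine you cannot yet change.
def legacy_mean(values):
    total = 0.0
    for v in values:
        total += v
    return total / len(values)

# Golden outputs recorded from a known-good run of the existing code.
GOLDEN_CASES = [
    (([1.0, 2.0, 3.0],), 2.0),
    (([10.0, 10.0],), 10.0),
]

def test_acceptance():
    # The refactored code must keep reproducing the recorded outputs.
    for args, expected in GOLDEN_CASES:
        got = legacy_mean(*args)
        assert abs(got - expected) < 1e-12, (args, got, expected)
```

The point is that the tests pin observed behavior, not presumed intent - exactly what you need before refactoring code whose intent you don't yet fully understand.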
Refactor for clarity, not performance - if you don't have a performance problem, don't bother. If you haven't profiled the code, you don't know where the bottlenecks are; you're just guessing and wasting effort.
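Profiling doesn't have to be elaborate. A minimal sketch using Python's standard-library profiler; `hot` and `workload` are toy stand-ins for whatever code path you suspect:

```python
import cProfile
import io
import pstats

def hot(n):
    # Toy stand-in for a suspected hotspot.
    return sum(i * i for i in range(n))

def workload():
    # Toy stand-in for a realistic end-to-end run.
    for _ in range(50):
        hot(10_000)

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Print the top entries by cumulative time; the report names the
# functions where time is actually spent.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

Ten minutes with a profiler replaces days of guessing; most languages have an equivalent tool (`gprof`, `perf`, VisualVM, etc.).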
Static analysis, linting, and code reformatters are your friends. Automatic API documentation tools are your friends too, but the bigger win is understanding the interface of each routine and the mutability of its arguments and imports.
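Argument mutability is worth auditing explicitly, because legacy routines often modify their inputs in place without saying so. A minimal sketch; `mutates_argument` is a hypothetical helper, and the two routines are toy examples of the behaviors you're trying to distinguish and document:

```python
import copy

def mutates_argument(func, arg):
    """Heuristic check: does func modify its (deep-copyable) argument
    in place? Hypothetical helper for auditing legacy routines."""
    before = copy.deepcopy(arg)
    func(arg)
    return arg != before

def sort_in_place(xs):
    # Legacy-style routine: silently mutates its input.
    xs.sort()

def sorted_copy(xs):
    # Safer routine: leaves its input alone, returns a new list.
    return sorted(xs)
```

Once you know which routines mutate what, record it in the docstrings - that knowledge is exactly what future refactoring depends on.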
Use your platform or language's native packaging tools. Ship a standard installer, not a box of loose parts. Define versioned releases and ship packages; users are often not developers, so expecting them to understand git is not acceptable. A `curl | sudo` build/install process displays a critical lack of skill, care, awareness, and competence. Use of conda and containers (for non-server applications) is a red flag that an application has unmanageably complex dependencies and setup, and usually indicates lazy design and a poor understanding of the code and the deployment process.
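For a Python codebase, "native packaging" can be as small as a `pyproject.toml` that gives the project a name, a version, and an entry point. A minimal sketch; the project name, version, and module path are placeholders:

```toml
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "legacy-app"            # placeholder project name
version = "1.0.0"              # versioned releases, not git hashes
requires-python = ">=3.9"
dependencies = []              # list runtime dependencies explicitly

[project.scripts]
legacy-app = "legacy_app.cli:main"   # hypothetical entry point
```

With this in place, users install a versioned package with their normal tooling instead of cloning a repository.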
With integral acceptance tests, a repeatable automated build process, and good version control, initial gross refactoring may now be possible. Avoid large-scale code churn, but understand that it might very occasionally be necessary - this is why linting and automatic formatting should be done prior to every commit, along with testing.
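Running lint and formatting before every commit is easy to automate with a hook manager such as `pre-commit`. A minimal sketch of its config; the tool choices and pinned revisions are examples, not requirements:

```yaml
# .pre-commit-config.yaml - runs lint and formatting before every commit.
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.4.4          # pin to whatever version you have vetted
    hooks:
      - id: ruff         # linting
  - repo: https://github.com/psf/black
    rev: 24.4.2          # pin to whatever version you have vetted
    hooks:
      - id: black        # automatic formatting
```

Automating this ensures that formatting-only churn happens in small, mechanical, reviewable commits rather than mixed into behavioral changes.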
Keep code evolutions focused; make one very specific modification at a time. Use work planning tools (kanban board, issue tracker, etc.) to strictly define and document each change's scope, purpose, and acceptance criteria. Importantly, document what's explicitly _not_ in scope. Write these work plans yourself - do not allow users or managers to plan your work. Resist scope creep; it is better to revert a half-completed targeted revision, rescope, and replan the evolution than to uncontrollably widen scope into a change too large and difficult to test or review.
This is a long tedious thankless process and often needs to be treated as a labor of love.
If you have the time, practice code recovery on real applications. Practice and experience make refactoring and recovery easier and safer.
A huge goal of this work is to understand the code at a deep level and be able to communicate and document that understanding. AI tools cheat you out of that understanding and experience, and launder out the subtle cues and evidence of the code's design and intent. If anything, they will make the code worse.
Code recovery and revitalization is tedious and painstaking, but it can be intensely rewarding, especially for the depth of knowledge and skill you build and the redundant greenfield development you avoid. Sometimes a rewrite is necessary; often it's simply not worth it, because the rewrite will cost more and be less dependable than the legacy code.