NB: The question specifically mentions Git, but it could probably apply to version control in general, so I've tried to write a generic answer, although it certainly applies to Git.
The problems solved by Version Control Systems (VCS) apply just as much to developers working on their own as to teams; the only difference is that the consequences of a whole team working without version control tend to be far more chaotic and time-consuming.
I'd personally suggest to any software developer that they always keep their work under version control (Git is a good choice). It generally costs nothing in time and effort (it often saves time, in fact), whereas working without version control can cost you dearly when you discover a mistake or need to juggle several different tasks at the same time.
> My current method of versioning is that I have a sub-folder included in my path that houses all my subfunctions. My scripts are in the current working folder. Whenever I decide to make a radical change (cleaning up redundant files, merging functions together, breaking functions into smaller functions, etc.), i.e. trying to get things more organized, I copy the whole working folder's contents into an archive called, for example, "Version3", and then continue making changes in my current working folder. Is this something that could be better managed using Git?
In a VCS, you would stop all of this copying and pasting and instead keep your main code in a "master" branch (sometimes called a "trunk" or "root") that acts as your main, current-working line of code.
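As a minimal sketch in Git (the folder name here is just illustrative), turning your existing working folder into a repository is a one-time step:

```
cd my-matlab-project        # your existing working folder
git init                    # create an empty repository in this folder
git add .                   # stage everything currently in the folder
git commit -m "Initial import of scripts and subfunctions"
```

From then on, you commit each meaningful change rather than copying the whole folder somewhere.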
If you are implementing a high-risk breaking change that you don't want in your trunk, a typical workflow using version control would be to create a separate branch (based on the trunk) to develop and test those changes in isolation until you're finished and feel that it's time to re-integrate.
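In Git that might look something like the following (the branch name and commit message are just examples):

```
git checkout -b cleanup-refactor   # create and switch to an isolated branch
# ...edit and test your changes...
git add .
git commit -m "Split datatableimport into smaller helper functions"
```

Your master/trunk stays untouched until you decide the branch is ready to merge back in.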
As far as your basic workflow is concerned, you can approach it in much the same way as you do with your folders, just without making all of those unnecessary copies; the VCS does the work of figuring out what changed, and when.
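For instance, Git can report exactly what has changed since your last commit, which replaces the "compare two folders by eye" step entirely:

```
git status    # which files have been added, modified or deleted
git diff      # the line-by-line changes within those files
```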
The problem with folders is that the only tool you have to manage them is the filesystem on your O/S - they're a blunt instrument. A folder is a huge snapshot of everything; your O/S filesystem probably has no concept of change management or version history.
VCS tools record deltas (i.e. time-stamped changes) known as commits. At a simple level, a VCS is a repository of changes-over-time which you can view and navigate like a historical timeline, including the points where branches fork off, and it lets you 'travel' back in time to an earlier state if necessary.
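In Git, that timeline and the "time travel" look roughly like this (the commit id is a placeholder you'd copy from the log):

```
git log --oneline            # view the history of commits as a timeline
git checkout <commit-id>     # inspect the project exactly as it was at that commit
git checkout master          # return to the latest version on your main branch
```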
A VCS provides a merge facility to reintegrate those isolated changes into the master/trunk; merging typically happens when the changes in a branch are stable (i.e. you've tested them, they seem to work, and they're ready to be released).
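Continuing the earlier example, reintegrating a finished branch in Git would look something like:

```
git checkout master            # switch back to your main line
git merge cleanup-refactor     # bring the branch's commits into master
```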
Equally, if you've fixed bugs or introduced minor features on the master/trunk since creating the branch, the merge tool lets you bring the branch up to date with those changes as well.
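The same mechanism works in the other direction; for example, to pull recent bug fixes from master into a still-in-progress branch:

```
git checkout cleanup-refactor
git merge master               # update the branch with the latest trunk changes
```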
If you have modified the same code in both your trunk and your branch, the merge process will attempt to reconcile the changes on both sides; changes to different parts of a file can usually be combined automatically, but overlapping edits to the same lines produce a conflict that needs your attention.
Sometimes merge conflicts are complex - for example, you might have deleted a chunk of code in one branch and refactored the same chunk in another; in that case you've got two contradictory changes, and human judgement is needed to figure out how to resolve the conflict.
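When that happens in Git, the conflicting region is marked directly in the file and the merge pauses until you resolve it (the file name and contents below are illustrative):

```
# Inside the conflicted file, Git inserts markers like:
#   <<<<<<< HEAD
#   ...the version from master...
#   =======
#   ...the version from the branch...
#   >>>>>>> cleanup-refactor
# Edit the file to keep what you actually want, then:
git add datatableimport.m
git commit                     # concludes the merge
```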
> Also, how does Git handle things if I want to change a function from, say,
>
> ```
> function T = datatableimport(fileName)
> ```
>
> to something like
>
> ```
> function T = datatableimport(fileName, lowerDateLimit, upperDateLimit)
> ```
>
> where the contents and arguments of datatableimport are different? This would presumably break the code in older scripts that use the function. Would it be better to make a new function instead, called
>
> ```
> function T = datatableimportnew(fileName, lowerDateLimit, upperDateLimit)
> ```
>
> and keep the old one in a 'legacy' folder, so that things continue to work? I'm just unsure about the best way to go about organizing and handling such circumstances, and any help would be appreciated. Thanks!
To put it bluntly - git can't really do much to help you here, and there's not really any "right" or "wrong" answer, as it depends on the severity of the change.
You could go and 'fix' the legacy scripts, so long as you're comfortable that it won't snowball into a whole mess of changes; but in many cases, the best thing to do with legacy code is to leave it alone and simply give the new version a different name, as you suggest.
Unrelated to the question, but more on maintaining legacy code:
When you're forced to make changes to legacy code, the best way to protect against breaking something is to ensure that your code is covered by Automated Tests where possible - then you can quickly identify when something is broken.
If nothing else, try to work with tools that can perform static analysis checks, if any are available for your language.
Also try to stick with language features which maximise the chance of errors being picked up by the compiler/interpreter, and consider setting up a Continuous Integration environment to run those tools against your code on a regular basis (e.g. every night).
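As a rough illustration (the paths and the `tests` folder are assumptions about your setup, not a prescription), a nightly job could be as simple as a scheduled script that fetches the latest code and runs your MATLAB test suite:

```
# Illustrative nightly build script (run by cron, Jenkins, etc.)
cd /path/to/my-matlab-project
git pull
matlab -batch "results = runtests('tests'); assert(all([results.Passed]))"
```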