12

We have a large website built in PHP, running under Apache on Linux. There are programmers as well as non-technical users modifying the website every day, and we do it directly on the test server.

Is there any alternative to modifying the files directly on the server?

Sure, the programmers could each set up an Apache instance on their own machine and manage it, but it's a bit of trouble, as sometimes we add new extensions to PHP or change the Apache configuration, which means everyone needs to make those changes manually. Also, it's not realistic to expect our non-technical users to manage their own Apache instance.

Another problem is that we are all on Windows computers, but the website runs under Linux. The code is not compatible with Windows, so that is another issue. Switching completely to Linux is not an option either, as we use many programs that only run on Windows for other tasks, so it would need to be a dual boot or a VM.

It feels strange to all work directly on the server, but is it the best way to go in our case?

Gudradain
  • @gnat I don't see how it's related... – Gudradain Nov 05 '15 at 23:00
  • @Gudradain the accepted answer on the dupe target does answer your question. –  Nov 06 '15 at 03:38
  • What reason do 'non-technical users' have to be editing your website's code? – MrLore Nov 06 '15 at 10:50
  • Strange as it seems, this might be just fine medium term with two minor fixes: (1) use two test servers, one for dev, one for pre-production, to make sure you can test stable code; (2) start using some VCS, like git - start with a nightly automated commit on the dev server, manual push to the pre-production server, and advertise from there. – Eugene Ryabtsev Nov 06 '15 at 12:07
  • @MrLore They are mostly changing text, images and translations. True, it should be separate, and we tried to separate it as much as possible, but it's still there directly in the PHP files... – Gudradain Nov 06 '15 at 14:31
  • @EugeneRyabtsev So far, I think you have given the most realistic solution. We are indeed using two test servers already, one for dev and one for pre-production. Adding automated git to that would be easy, give us a history of changes and provide us with multiple backups if something goes wrong. We could start from there and then add new servers per developer when they need to make breaking changes. – Gudradain Nov 06 '15 at 17:00
  • It sounds like your company needs to work on its engineering process. You should have automated deployments that you can run on a VM on your development box. This should set up PHP, Apache, etc. with any needed plugins. Then you should have automated deployment based on source control. This way everyone can make changes to the server files on their local test instance and confirm that their deployment process works, before pushing the changes to a dev test server, pre-production, or production. It's worth investing in a good engineering process. – Dan Nov 06 '15 at 18:48
  • Get Linux machines for the developers and use docker for server configuration management; it's an investment but will pay off. –  Nov 06 '15 at 19:09

5 Answers

54

Everyone altering the application should be doing it in source control, and you should have an automated process for deploying a specific version from source control to your test server. It may take some persuasion to get your non-technical people to use source control, but there really is no alternative for keeping a non-toy system running reliably. If the check-in-and-deploy step is simple enough, it won't matter that no one is working directly on the server. People like using git for this because git is fast and has developed an ecosystem of good end-user tools. Having people work directly on the server is a recipe for disaster.
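A minimal version of such an automated deployment, assuming git and a bare repository on the test server (every path and branch name here is an assumption, not a prescription), is a `post-receive` hook that checks the pushed revision out into the web root:

```shell
#!/bin/sh
# Hypothetical hook at /srv/git/site.git/hooks/post-receive on the
# test server: every push to master is checked out into the web root,
# so nobody needs to edit files on the server by hand.
TARGET=/var/www/site        # Apache document root (assumed)
GIT_DIR=/srv/git/site.git   # bare repository receiving pushes (assumed)

while read oldrev newrev refname; do
    if [ "$refname" = "refs/heads/master" ]; then
        git --work-tree="$TARGET" --git-dir="$GIT_DIR" checkout -f master
    fi
done
```

Deploying then becomes an ordinary `git push test master`, which is about as simple as a check-in-and-deploy step can get.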

antlersoft
  • I thought about doing what you said, but doing a commit and then pushing it to the server to test every little change you make seems really impractical. If I made a little typo in a JS file, I would have to recommit and repush just to fix the typo and test what I want to test. The lack of local testing makes it painful. – Gudradain Nov 05 '15 at 23:08
  • @Gudradain, you say that as if you had no other choice. You could set up two test machines, one that's running the latest "master" branch, the other that's running a specific version, such as your own branch. You could make your automated deployment tool be able to pick up your local source code and copy it to a clean folder on that test machine, etc. – sleblanc Nov 06 '15 at 04:09
  • @sleblanc So in other words, you have to either still live-code directly on the second test machine, or you still have to commit and push every time you fix a typo? I'm not sure you've fixed the original problem. – user253751 Nov 06 '15 at 10:19
  • @immibis, that's because the actual solution is to run locally if you want to quickly test against typos, without directly making changes to a server. sleblanc has suggested having two servers to get round this, so you can at least have some stability in one server. – Tom Bowen Nov 06 '15 at 10:39
  • @Gudradain Code should not be committed before it has been tested. Any workflow which requires code to be committed before it has been tested is flawed. Once the code has been tested, it is to be committed to a distributed version control system - which could be git. The testing before commit needs to happen in an environment which isn't used by anybody else. If many people are doing their tests in the same environment, you have a flawed workflow. – kasperd Nov 06 '15 at 11:51
  • @Tom.Bowen89 In OP's case "run locally" is not a solution; it is a bag of problems quite a bit larger than the original bag. Not sure why this answer gets so many upvotes. – Eugene Ryabtsev Nov 06 '15 at 11:56
  • @Gudradain There's no need for Apache to check trivial issues like typos. Existing checkers can be run against a user's copy of the source (locally or via a network share). This can be made into a pre-commit hook for version control, so commits can only be sent to the test server if the sanity checks pass. – Warbo Nov 06 '15 at 12:44
  • @EugeneRyabtsev, exactly. As the OP has ruled out the only decent solution, 2 servers is the next best. – Tom Bowen Nov 06 '15 at 12:45
  • If this seems like a lot of work to you, how much more work will it be when two people accidentally edit the same file one after the other and the site fails? You have no copy of the original (working) file to quickly roll back to. Depending on how hard the fix is, your site could be down for some time. At that point, doing a commit and then pushing to a test server will seem like a very good idea. – Paddy Nov 06 '15 at 16:33
  • @kasperd You pretty much get the issue with that answer. I still wonder why it's upvoted that much while there are answers that provide better solutions for my problem, like Vagrant or setting up multiple test servers. – Gudradain Nov 06 '15 at 16:48
  • @kasperd, a blanket rule such as "no untested code shall be committed" is deeply counter-productive. I use local branches all the time to save random tests. If you make your deployment tool aware of such local branches, there is no problem in committing untested code in the local branch, for the purpose of using `git push` as part of the plumbing in your deployment. – sleblanc Nov 06 '15 at 19:41
  • @Paddy, as it is now, developers at the OP's company share a single test machine for all testing purposes. Adding a second machine allows them to have a "stable" (staging) testing machine and another "unstable" (development) one. Now, if they have problems sharing this machine, adding a third, fourth, etc. is also an idea. The tricky part is to balance convenience (no conflicts) against the added workload (maintenance of multiple systems). An automated deployment system like Vagrant will allow you to build a VM on the fly, eliminating all such arguments. – sleblanc Nov 06 '15 at 19:43
  • @sleblanc If your version control system has the means to ensure none of the other developers working on the same project ever has to look at those commits, then it is ok to commit untested code. What matters is the list of commits you present to the other developers in the end, not what workflow you used to produce it. Each of the commits you do present to the other developers need to be tested and still small enough that each commit can be reviewed in a reasonable time. – kasperd Nov 06 '15 at 20:17
  • @kasperd, don't confuse the version control system with the typical central repository. Branches on the central repository have to use standard names. Some organisations will allow developers to push their own branches, often under a prefix (e.g. `sleblanc/my-feature`), and usually it is allowed to commit untested code (except 100% verboten stuff like passwords, obviously) that a clean VM can pull and test. If the central repository is not allowed to have user branches, the virtual machine can still receive your commits in your local branches by use of `git push`. – sleblanc Nov 06 '15 at 21:16
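The pre-commit sanity check suggested in the comments above can be sketched as a hook that runs PHP's built-in linter over the staged files; apart from the standard git hook location and `php -l`, everything here is an assumption:

```shell
#!/bin/sh
# Hypothetical .git/hooks/pre-commit: reject the commit if any staged
# PHP file fails `php -l` (PHP's syntax check), so typos are caught
# before the change can reach the shared test server.
for f in $(git diff --cached --name-only --diff-filter=ACM -- '*.php'); do
    php -l "$f" || exit 1
done
```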
19

Use Vagrant to set up an environment which mirrors the settings on the server, and let them play there.

Getting things back to the server is a matter of deployment; that part is answered well by @antlersoft, with one caveat: I don't believe you can persuade non-technical users to use source control (and git is one of the harder ones). If you need non-technical users to change the web content, invest the time and money to find an appropriate CMS and use that.
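As a sketch of what that could look like (the box name, package list and port are assumptions about the server's actual setup, not a tested configuration), a `Vagrantfile` can install Apache and PHP and share the project directory with the VM:

```ruby
# Hypothetical Vagrantfile: `vagrant up` gives every developer the
# same Linux + Apache + PHP stack the server runs.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"              # match the server's distro
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.synced_folder ".", "/var/www/site"   # edit locally, served from the VM

  # Re-run `vagrant provision` whenever the team adds a PHP extension,
  # so everyone picks up the change without manual steps.
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y apache2 php5 libapache2-mod-php5
  SHELL
end
```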

herby
  • From my quick read, it seems a tool like Vagrant would solve our local setup problem. About Git, I really don't think we can convince them either. A CMS has been in the making (by another team) for multiple years now, but there seems to be no progress at all, so it's a dead end on that front too. Maybe the non-technical users could all work on the same server and the programmers on their local machines. Might be a good compromise for now. – Gudradain Nov 05 '15 at 23:16
  • @Gudradain: The devs should *really* consider using a VCS. VCSs have existed since at least [1973](https://en.wikipedia.org/wiki/Source_Code_Control_System); it's not a shiny brand new hipster technology. If the non-tech guys cannot work the VCS directly, some technical people could merge their work into the VCS repo at regular intervals. – ysdx Nov 06 '15 at 09:55
  • @ysdx is right that devs should use a VCS and that some deployment pipeline should put data into production, with no live edits there. But you can get there step by step - if they have their local environment set up with Vagrant, they can play there until they get the change to work right, then push only that change to the VCS. Of course, the VCS should not be used in a messy way by committing every single change; the commit structure should help in understanding the changes (shameless plug: http://blog.herby.sk/post/commits). – herby Nov 06 '15 at 10:25
  • BTW, GitHub wikis are quite user-friendly but are [backed by git under the hood](https://github.com/gollum/gollum). You can provide a dead simple non-technical-user-friendly interface on top of a VCS and let the technical people use the real VCS. – ysdx Nov 06 '15 at 12:27
12

Set up a fleet of virtual machines on a host: every user gets their own instance to play with, pushes from there to the main repository, and the real test server updates from the repository.

Don't try to introduce git to non-technical users; you might end up with an axe in your skull one day. :-) Even for power developers who use it every day, it adds such a high level of complexity that they fail. Tools like SourceTree are nice, but they still don't protect you from screwing up.

The most user-friendly version control I have experienced so far is TortoiseSVN; even non-technical users understand the svn update and svn commit operations when they are presented in a clean graphical interface.

user203035
  • This is not really an answer to the question asked, but a (very biased) opinion on whether to use git or svn. And honestly, it's 2015 and I am baffled that you still argue against git. – dirkk Nov 06 '15 at 08:21
  • @dirkk There is nothing wrong with svn. Why shouldn't you use a tool if it fits the job? I think that this answer does not just present an opinion on git, but reality, as I have seen this too. – SpaceTrucker Nov 06 '15 at 08:33
  • @dirkk The first sentence is a perfect answer to the question. – glglgl Nov 06 '15 at 08:33
  • @dirkk http://xkcd.com/1597/ is dead on: it's very possible to get git into a state where a nonexpert can't recover other than by starting again with a clean checkout. Effective use of git requires a complete understanding of a complex system with confusing terminology. – pjc50 Nov 06 '15 at 09:40
  • @pjc50 I am not interested in this git vs svn discussion, and I tried to make that clear. My point is that, apart from the first sentence, this is just another git vs svn answer, which has nothing to do with the question asked. It doesn't matter which VCS you use if you modify files directly on the server. I am also baffled how anyone could read into my comment that I argued anything is wrong with svn. I neither said nor implied this, as I think this discussion is not worth having and is inappropriate here. – dirkk Nov 06 '15 at 09:45
  • If this answer had contained only the first paragraph, I would have upvoted it. But I won't upvote a suggestion to use a centralized version control system. Yes, it is possible for users to screw up a git commit, but that's true of any version control system. I have seen users screw up when using files with no version control. I have seen users screw up patch files. I have seen users screw up CVS commits. I have seen users screw up Perforce commits. I have seen users screw up Mercurial commits. – kasperd Nov 06 '15 at 12:04
10

The answer to your problem is twofold.

TL;DR: Use DTAP and implement a VCS.

Firstly, in an enterprise environment you never want to be coding directly on the server. Even if it's not the live environment, having multiple people editing files on the same environment gives you a very high chance of conflicting changes, which leads to unpredictable results. Fully implementing DTAP will help you solve this problem.

DTAP stands for "Development, Test, Acceptance, Production" and describes a system with 4 separate locations where the code can live. The separation is designed to shield Production from as many issues as possible:

First the code is run and tested on Development (which can be a separate machine but is often simply the developer's own PC). If the developer is satisfied with his work, the code moves on to the Test environment, where someone else (ideally a dedicated tester, though another developer will do) will try the code out. If this phase of testing finds no issues, the code moves on to Acceptance, where the "business" side of the company will test the code. This can be your customer, if your company makes a product for another party, or a different department that is in touch with what the product should do in order to match the end-customers' needs. Only if they accept the code does it move on to the Production environment, which is the live environment.

Secondly, to keep track of the changes and to prevent problems that arise when multiple people modify the same area of the code at the same time, you should implement a VCS (Version Control System) in your organisation that will keep track of all the changes to all the files (examples are SVN, Git and Mercurial). These systems will also allow you to roll back changes if it turns out you made a mistake. To make life easier, it might be worth using your system of choice with an account at a service like Bitbucket which will allow you to use their interface for the process of combining one developer's work with another. All of your developers will have to agree on the specific strategy to use for merging the code into the state that will be moving on from development to the next environment, but I won't overload my answer with that.
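To make the roll-back point concrete, here is a sketch with git, demonstrated in a throwaway repository (file names and messages are placeholders):

```shell
# Rolling back a bad change with git, shown in a scratch repository.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name dev

echo "working page" > index.php
git add index.php && git commit -qm "good version"

echo "broken page" > index.php
git add index.php && git commit -qm "accidental breakage"

# One command creates a new commit that undoes the breakage,
# while the full history of who changed what is preserved.
git revert --no-edit HEAD >/dev/null
cat index.php    # back to "working page"
```

The same `git revert` works no matter how long ago the bad change was made, which is exactly what editing files directly on a server cannot give you.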

When you say that non-technical people are modifying the test environment, I hope you mean they make changes to the configuration and/or content, not to the actual code files. Non-technical people should never be touching the code of the website; after all, if they have no idea what they are doing, they might accidentally break something!

With regard to the compatibility issue and the statement that non-technical people cannot be expected to maintain their own Apache server, there are several options I can see. One option is to have one person manage the installation of these servers for all the parties involved, but this can be a full-time job and it might not fit your organisation to have such a person. The alternative is to use one of the available tools like Vagrant or Docker, which allow one technical person to create an "image" containing the working environment (which will typically be Linux based); that image can be distributed to all the developers so they can run it on their own machine. If updates to the architecture need to be made, this person updates the image and distributes it to the developers again.
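With Docker, for instance, that shared image could be described in a Dockerfile maintained by one technical person; this is only a sketch, and the base-image tag and extension list are assumptions about what the site actually needs:

```dockerfile
# Hypothetical Dockerfile describing the site's stack once.
# When the team needs a new PHP extension, this file is updated,
# rebuilt, and redistributed, instead of everyone changing their
# own setup by hand.
FROM php:5.6-apache
RUN docker-php-ext-install mysqli
COPY . /var/www/html/
```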

Personal opinion: from the original post I infer that your company is past the stage of "only 2 people are working on the site anyway, so it doesn't matter what we do". The concepts I propose are widely accepted as good practices; as you can see from my explanation, you gain a lot of added quality checks on the product at the cost of some relatively slight additional complexity in your process. A version control system like Git or Mercurial may seem intimidating, but in my opinion, with some training literally anyone can learn to work with it properly. All it requires is a willingness to learn, instead of seeing the system as an enemy that is preventing you from getting your work done; the latter is sadly a lot more common than it should be. By learning to use the tool properly, you save yourself the time you would otherwise spend correcting mistakes and manually sorting out how the work from multiple people should be combined in the code.

Cronax
  • OP says "we do it directly on the **test** server". – Eugene Ryabtsev Nov 06 '15 at 12:00
  • @EugeneRyabtsev I've made the appropriate changes, let me know if I missed something. – Cronax Nov 06 '15 at 13:19
  • The programmers use Git every day for other projects... It's just this one that has been a problem, because of how hard it is to set up a local environment, the problem of the non-technical people editing the website files directly (it's bad, I know, but still our reality) and the difficulty of convincing everyone (10-15 people) to change the way we have been doing it since forever. – Gudradain Nov 06 '15 at 14:26
  • @Gudradain What is the justification for having the non-technical people editing the website files directly? I can't imagine a valid case where this would make more sense than building some sort of interface for them that will be able to do stuff like validation etc. but I could be wrong. And if people say "we've been doing it this way since forever" remind them that if we keep thinking like that then we might as well throw out our computers and go back to stone tablets ;) – Cronax Nov 06 '15 at 15:33
  • We indeed have been doing it since forever and our "side project" of using a real CMS has failed for quite some time already... – Gudradain Nov 06 '15 at 15:39
3

First of all, version control is a must. Moreover, any change needs to be tested before it is committed, and for that testing each user needs their own test environment. So coding directly on the server is not a good idea.

How much separation you need between the test environment of each user depends on your specific needs. For some simple tasks you may achieve sufficient separation by having a shared test server running a vhost per user. If you need more separation you can configure a VPS per user.
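The lighter-weight option, a vhost per user on a shared test server, amounts to a few lines of Apache configuration per user (the hostname and path here are hypothetical):

```apache
# Hypothetical /etc/apache2/sites-available/alice.conf: each user
# gets their own document root and hostname on the shared box.
<VirtualHost *:80>
    ServerName alice.test.example.com
    DocumentRoot /home/alice/www
</VirtualHost>
```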

For the most demanding tasks you may need a dedicated physical machine per user. If that means you need to purchase another physical machine for some users, you have to decide whether you want the other machine to be located at the user's desk or in a server rack.

I have seen users mess up commits in every version control system I have ever worked with. Some people are able to learn how to work with a version control system correctly, other people are not. If you have non-technical people who need to change some of the code base, but cannot learn how to use version control, there is another option worth considering.

You can let the developers have access to the non-technical person's test machines (either as a shell login or through a network file system), such that once a change is ready the non-technical person can ask a developer to commit the change.
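When a developer checks in such a change, git can at least record who actually made it. A sketch, demonstrated in a scratch directory (the names are placeholders; in practice the developer would cd into the user's tree over the network mount):

```shell
# A developer commits a non-technical user's change on her behalf;
# --author records who really made the edit, while the committer
# stays the developer who checked it in.
set -e
cd "$(mktemp -d)"                # stand-in for the user's mounted tree
git init -q
git config user.email dev@example.com
git config user.name "Dev"

echo "new landing page copy" > landing.php
git add -A
git commit -q --author="Alice <alice@example.com>" -m "Update landing page copy (for Alice)"
git log -1 --format="%an / %cn"  # prints "Alice / Dev"
```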

kasperd