I agree that normally this would be the best approach, but we have to split the repos out as we are beginning to release multiple products on different schedules, so having them all in one repo along with the core code is causing bigger issues.
Indeed. Theory often has to take a hike when it sees the projects we've inherited.
This is true. The strategy is to extract the core code into a separate submodule so that each application can link to whichever version of the core code it needs. This core code has already been refactored so it all sits under one sub-directory, ready to be moved. Once that has been done and everything has settled, the rest of the code base will be split out until everything is separated.
That statement fills me with dread. There's no problem locking down a particular version of the code base to go with a release and its hotfixes, but you don't want to end up in a position where an upgrade to a new core version requires weeks of work because a team hasn't been keeping things up to date.
When discussing how things will play out long term, remind them that for sanity purposes alone they need to keep up to date with the latest core version, for rapid security fixes if nothing else.
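To make the "pin a version, then keep bumping it" idea concrete, here's a rough sketch using throwaway local repos (repo names and tags are invented; recent git needs the `protocol.file.allow` override for local `file://` submodule URLs):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d); cd "$tmp"
g() { git -c user.email=dev@example.com -c user.name=Dev "$@"; }

# Stand-in for the extracted core repo, with two tagged releases.
git init -q core
(cd core \
  && g commit -q --allow-empty -m "core v1" && git tag v1.0 \
  && g commit -q --allow-empty -m "core v2" && git tag v2.0)

# The application links core as a submodule, pinned at a release.
git init -q app
cd app
g commit -q --allow-empty -m "initial app commit"
# Recent git needs this override for local file:// submodule URLs.
g -c protocol.file.allow=always submodule add -q ../core core
git -C core checkout -q v1.0      # move the submodule to the wanted tag
git add core                      # stage the updated submodule pointer
g commit -qm "Link core, pinned at v1.0"

# Later: bump to the newest core to pick up security fixes.
git -C core checkout -q v2.0
git add core
g commit -qm "Bump core to v2.0"
git -C core describe --tags       # shows which core release is linked
```

The key point is that the superproject records a specific core commit, so every checkout of the app rebuilds against exactly that core version until someone deliberately bumps the pointer.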
The more I think about it, the more I think you may be going about this the wrong way. You don't have one project with a number of submodules; you really have a number of separate projects, all of which require a submodule containing the core code base.
Your issue there is that you seem to be trying to split a monolithic application before you have identified what the common core / baseline application actually is (noting your point that the code from the submodules needs to be built with the code from the main repo, as it is all very heavily coupled).
If you look to use a microservice architecture (especially if this is a cloud/web-based app), it may remove the need for tight coupling in the code base (though you would still need a deployment to fix a NuGet package bug) and allow deployment of individual components as required.
Microservices is definitely the direction I eventually want to head!
I'm tempted to say "you need to sort out the architectural issues in the code before you look at submodules vs NuGet". Splitting what sounds like a single code base (or a tightly coupled mess of code bases) can be done in one repo.
I would look to use NuGet if the code is shared across multiple repos; the ability to pull in, or lock to, a specific version is a great feature.
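For what it's worth, that locking is done with NuGet's version-range syntax in the consuming project file; the package name below is just an invented example:

```xml
<!-- A bare version like 2.1.0 means "2.1.0 or higher";
     square brackets pin the reference to exactly that release. -->
<ItemGroup>
  <PackageReference Include="MyCompany.Core" Version="[2.1.0]" />
</ItemGroup>
```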
One thing that sprang to mind when dealing with a monolithic code base being split into smaller repos: is the use of submodules/NuGet the right choice? You might really only be decoupling in name, and could be adding an extra layer of complexity that isn't justified.
Git can be tricky enough without messing around with submodules and multiple repos. Things you might run into: if you do a release, all the repos or submodules need to be tagged at the same time with the same version. If there's a bug down the line, you might need to go back to a previous release, branch all the repos, and apply the fix to that version. It becomes a PITA.
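To illustrate the coordination overhead: every release means stamping the same tag across every repo in lockstep, something like this sketch (repo names and version are invented):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d); cd "$tmp"
g() { git -c user.email=dev@example.com -c user.name=Dev "$@"; }

# Throwaway stand-ins for the split-out repos.
for r in app core billing; do
  git init -q "$r"
  (cd "$r" && g commit -q --allow-empty -m "work")
done

# A release (and any later hotfix branch) has to touch every repo
# in lockstep - miss one and the release is not reproducible.
ver=v1.4.0
for r in app core billing; do
  g -C "$r" tag -a "$ver" -m "Release $ver"
done
git -C billing tag --list "$ver"   # every repo now carries the tag
```

The hotfix case is the same loop again with `git -C "$r" checkout -b "hotfix/$ver" "$ver"`, which is exactly where the PITA factor comes from.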
My concern was that NuGet seemed to be a way of bringing in libraries after they were built.
I need to bring in the relevant version of the code so it can all build together, be debugged together, etc.
Can NuGet do this? The information out there seems conflicting, but most of it implies not.
Where I work we have one large repo which I am trying to split into smaller repos that are linked together.
I was thinking of linking them using git submodules but that seems to cause other issues.
Someone suggested using NuGet, but I am not sure that will quite work as, at least for the time being, the code from the submodules needs to be built with the code from the main repo (it is all very heavily coupled). Am I misunderstanding? Is this a use case for NuGet?
Are there better technologies to use than git submodules for this?
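One of the "other issues" people usually hit first with submodules is that a plain clone of the superproject leaves the submodule directory empty until it is explicitly initialised. A runnable sketch with throwaway local repos (names are invented):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d); cd "$tmp"
g() { git -c user.email=dev@example.com -c user.name=Dev "$@"; }

# A "core" repo and an "app" repo that links it as a submodule.
git init -q core
(cd core && g commit -q --allow-empty -m "core code")
git init -q app
(cd app \
  && g commit -q --allow-empty -m "app code" \
  && g -c protocol.file.allow=always submodule add -q ../core core \
  && g commit -qm "add core submodule")

# Gotcha: a plain clone leaves the submodule directory empty.
git clone -q app app-clone
git -C app-clone submodule status   # leading '-' means "not initialised"

# The clone only gets core after an explicit init/update
# (or by cloning with --recurse-submodules in the first place).
git -C app-clone -c protocol.file.allow=always submodule update --init -q
git -C app-clone submodule status   # now shows the checked-out commit
```

Every collaborator (and every CI job) has to remember that second step, which is part of why people reach for package-based alternatives like NuGet instead.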