The .gitlab-ci.yml file is the one you are familiar with, responsible for
rendering and triggering the child pipeline created with gcix.
The latter is written into the .gitlab-ci.ts file.
Now, let's examine how the .gitlab-ci.yml file should be
structured for this project:
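The exact layout depends on your project, but a minimal sketch of such a parent pipeline could look like the following. The job names, the image, and the install/run commands are placeholders to adapt to your setup; only the generated-pipeline.yml file name is taken from this chapter, and trigger:include:artifact with strategy: depend is standard GitLab CI syntax for dynamic child pipelines.

```yaml
# Minimal sketch of a parent .gitlab-ci.yml (job names, image, and commands
# are placeholders; adapt them to your project).
render-pipeline:
  stage: build
  image: node:lts                    # assumption: a Node.js image for the TypeScript variant
  script:
    - npm ci                         # install gcix and other dependencies
    - npx ts-node .gitlab-ci.ts      # render the child pipeline into generated-pipeline.yml
  artifacts:
    paths:
      - generated-pipeline.yml

trigger-pipeline:
  stage: deploy
  needs:
    - render-pipeline
  trigger:
    include:
      - artifact: generated-pipeline.yml
        job: render-pipeline
    strategy: depend                 # the parent pipeline mirrors the child pipeline's status
```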
To transform this [Pytest][2] code into a valid .gitlab-ci.py file, you need to:

- Remove the import statement: from tests import conftest.
- Place your pipeline code directly in the .gitlab-ci.py file, outside the def test(): function.
- Instead of testing the rendered pipeline with conftest.check(pipeline.render()), write the generated-pipeline.yml with pipeline.write_yaml().
The resulting .gitlab-ci.(ts|py) file, derived from the example, would look like the following:
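For illustration, here is a minimal sketch of the TypeScript variant. It assumes the original test defined a single do-something job like the example used later in this chapter, and that the package is importable as gcix (the repository's own tests import from a relative source path instead). The Python variant is analogous and ends with pipeline.write_yaml().

```typescript
// .gitlab-ci.ts — a minimal sketch; adjust the import to your installation.
import { Job, Pipeline } from "gcix"; // import path is an assumption

// The pipeline code now lives at module level: no test() wrapper and no
// conftest import.
const pipeline = new Pipeline();
pipeline.addChildren({
  jobsOrJobCollections: [
    new Job({ stage: "do_something", scripts: ["./do-something-on.sh development"] }),
  ],
});

// Instead of asserting on pipeline.render(), write generated-pipeline.yml.
// writeYaml() is assumed to be the TypeScript counterpart of write_yaml().
pipeline.writeYaml();
```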
The class JobCollection allows you to group jobs together to apply a
common configuration to all included jobs. This collection shares the same
configuration methods as demonstrated in the previous example
for individual jobs.
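The sketch below illustrates this idea (the import path and script names are illustrative): one job gets its own prepended script, while the collection prepends a common script to every job it contains.

```typescript
import { Job, JobCollection, Pipeline } from "gcix"; // import path is an assumption

const job1 = new Job({ stage: "job1", scripts: ["date"] });
const job2 = new Job({ stage: "job2", scripts: ["date"] });

// Configuration applied to a single job only.
job1.prependScripts(["./job1-specific-setup.sh"]);

const collection = new JobCollection();
collection.addChildren({ jobsOrJobCollections: [job1, job2] });

// Configuration applied to every job contained in the collection.
collection.prependScripts(["./common-setup.sh"]);

const pipeline = new Pipeline();
pipeline.addChildren({ jobsOrJobCollections: [collection] });
console.log(pipeline.render());
```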
As evident from the output, jobs can have their own configurations (indicated by job1.prependScripts([...]) in TypeScript, or job1.prepend_scripts([...]) in Python), and they can also inherit common configurations from their collection (indicated by collection.prependScripts([...]) or collection.prepend_scripts([...])).
Pipelines are an expanded version of a JobCollection and include all of
its capabilities (in addition to pipeline-specific abilities).
This includes configuration options and the ability to stack
other collections within them.
```typescript
import { Job, Pipeline } from "../../../src";

export function jobFor(environment: string): Job {
  return new Job({
    stage: "do_something",
    scripts: [`./do-something-on.sh ${environment}`],
  });
}

test("test", () => {
  const pipeline = new Pipeline();
  for (const env of ["development", "test"]) {
    pipeline.addChildren({ jobsOrJobCollections: [jobFor(env)] });
  }
  expect(() => {
    pipeline.render();
  }).toThrowError(/Two jobs have the same name/);
});
```
Error: Two jobs have the same name 'do-something' when rendering the pipeline
Please fix this by providing a different name and/or stage when adding those jobs to their collections/pipeline.
The error arises because both jobs were added with the same name to the
pipeline, causing the second job to overwrite the first one.
To avoid such conflicts, when adding jobs or collections to a collection,
you should use the .addChildren() method, which accepts the stage property.
You can utilize this property to modify the name of the jobs added.
The value of stage will be appended to the jobs' name and stage.
However, please note that this modification only applies to the jobs or
collections added at that moment and not to the jobs and collections already
present within the collection.
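A sketch of the corrected example (the import path is adjusted for illustration) passes the environment as the stage argument of .addChildren(), which produces the output shown below.

```typescript
import { Job, Pipeline } from "gcix"; // import path is an assumption

export function jobFor(environment: string): Job {
  return new Job({
    stage: "do_something",
    scripts: [`./do-something-on.sh ${environment}`],
  });
}

const pipeline = new Pipeline();
for (const env of ["development", "test"]) {
  // `stage` extends the name and stage of the added jobs, keeping them unique.
  pipeline.addChildren({ jobsOrJobCollections: [jobFor(env)], stage: env });
}
pipeline.writeYaml(); // assumption: TypeScript counterpart of write_yaml()
```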
The error no longer occurs because we have now added both jobs to the pipeline with different stage values. By doing so, in the output, we correctly populate one job per environment, ensuring that each job is appropriately associated with its respective environment.
```yaml
stages:
  - do_something_development
  - do_something_test
development-do-something:
  stage: do_something_development
  script:
    - ./do-something-on.sh development
test-do-something:
  stage: do_something_test
  script:
    - ./do-something-on.sh test
```
Namespacing significantly enhances the reusability of collections.
You can encapsulate an entire GitLab CI pipeline within a collection and then
reuse that collection for each environment. By repeating the collection within
a loop for all environments, namespacing ensures that all jobs of the
collection are populated uniquely for each environment, enabling efficient
configuration management and deployment.
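A sketch of that pattern might look like this (the helper function and script names are illustrative):

```typescript
import { Job, JobCollection, Pipeline } from "gcix"; // import path is an assumption

// A hypothetical collection bundling everything one environment needs.
function environmentCollection(environment: string): JobCollection {
  const collection = new JobCollection();
  collection.addChildren({
    jobsOrJobCollections: [
      new Job({ stage: "build", scripts: [`./build-for.sh ${environment}`] }),
      new Job({ stage: "deploy", scripts: [`./deploy-to.sh ${environment}`] }),
    ],
  });
  return collection;
}

const pipeline = new Pipeline();
for (const env of ["development", "test"]) {
  // The stage value namespaces every job of the collection per environment.
  pipeline.addChildren({
    jobsOrJobCollections: [environmentCollection(env)],
    stage: env,
  });
}
pipeline.writeYaml();
```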
As evident from the previous examples, all jobs possess a distinct stage,
causing them to run sequentially, one stage after another. This behavior occurs because the
stage property always extends the job's name and stage. This principle
applies universally to all stage properties, be it for the constructor of a
Job object or the .add_*() methods of a collection.
When adding jobs to a collection, whether directly or within another
collection, the objective may be merely to extend the names of the jobs while
leaving their stage unchanged. This approach ensures that jobs with equal
stages can run in parallel.
To achieve this, you can set identical values for the stage property while
providing different values for the name property when creating jobs or
adding them to collections. By doing so, the name property will extend only
the name of a job without affecting its stage.
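A minimal sketch (job names, stage, and scripts are illustrative):

```typescript
import { Job, Pipeline } from "gcix"; // import path is an assumption

const pipeline = new Pipeline();
pipeline.addChildren({
  jobsOrJobCollections: [
    // Equal stage, different names: both jobs end up in the same stage
    // and therefore run in parallel.
    new Job({ name: "job1", stage: "single_stage", scripts: ["date"] }),
    new Job({ name: "job2", stage: "single_stage", scripts: ["date"] }),
  ],
});
pipeline.writeYaml();
```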
In this scenario, we have chosen an equal value for the stage parameter,
ensuring that both jobs have the same stage. To prevent their name values
from being identical (and risking the second job overwriting the first one),
we have also provided the name property. The name property's value will be
appended to the existing name of the jobs. Consequently, both jobs will run
in parallel within the same stage.
You might wonder why there is no dedicated property for extending only the
stage. When considering collections, the stage property extends both the name
and stage of a job, while the name property extends only the name of a job.
Extending means appending values to the current name or stage values of a
job. However, there is no practical reason to extend only the stage of a job,
as that would leave two jobs with distinct stages but identical names. In
GitLab CI, a job must have a unique name, so extending just the stage wouldn't
serve any purpose. Therefore, the consistent concept of using only the name and stage
properties applies to both jobs and collections.
As for not omitting the stage property when creating the jobs, it is because
of the explanation in the previous paragraph. When creating jobs, we cannot
directly set the stage value. Omitting the stage property means leaving it
unset, which would default the GitLab CI jobs to the test stage. To define a
stage other than test, we used the stage property. Yes, this implies that
the job's name will include the value of the stage. However, this design
decision clarifies the concept of name and stage more effectively than
providing a stage property for jobs, especially when collections lack such a
(superfluous) stage property.
No worries! Here's a simple guide to keep in mind when creating Jobs:
- For distinct jobs that will run in separate stages within a collection, set different values only for the stage property.
- For distinct jobs that will run in parallel with equal stages, set different values only for the name property.
- For distinct jobs that will run in parallel with equal stages and a defined stage name, set different values for the name properties but equal values for the stage properties.
- Setting different values for both properties is not advisable and will result in the first scenario of distinct jobs running in separate stages within a collection.
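The sketch below illustrates these cases side by side (job names, stages, and scripts are placeholders):

```typescript
import { Job, Pipeline } from "gcix"; // import path is an assumption

const pipeline = new Pipeline();
pipeline.addChildren({
  jobsOrJobCollections: [
    // 1. Different `stage` values only: the jobs run in separate stages.
    new Job({ stage: "lint", scripts: ["./lint.sh"] }),
    new Job({ stage: "compile", scripts: ["./compile.sh"] }),
    // 2. Different `name` values only: the jobs share the default stage
    //    and run in parallel.
    new Job({ name: "unit", scripts: ["./unit-tests.sh"] }),
    new Job({ name: "integration", scripts: ["./integration-tests.sh"] }),
    // 3. Different `name`, equal `stage`: the jobs run in parallel within
    //    the named stage.
    new Job({ name: "eu", stage: "deploy", scripts: ["./deploy.sh eu"] }),
    new Job({ name: "us", stage: "deploy", scripts: ["./deploy.sh us"] }),
  ],
});
pipeline.writeYaml();
```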
name parameter when adding jobs (and collections) to collections
Let's consider the collection example from the chapter
Stages allow reuse of jobs and collections.
Instead of using the stage parameter when adding the collection multiple
times to the pipeline, we will now utilize the name parameter.
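A sketch of this variant (the helper function and script names are illustrative) might look like this:

```typescript
import { Job, JobCollection, Pipeline } from "gcix"; // import path is an assumption

function environmentCollection(environment: string): JobCollection {
  const collection = new JobCollection();
  collection.addChildren({
    jobsOrJobCollections: [
      new Job({ stage: "do_something", scripts: [`./do-something-on.sh ${environment}`] }),
    ],
  });
  return collection;
}

const pipeline = new Pipeline();
for (const env of ["development", "test"]) {
  // `name` extends only the job names, so the jobs of both environments
  // keep the same stage and run in parallel.
  pipeline.addChildren({
    jobsOrJobCollections: [environmentCollection(env)],
    name: env,
  });
}
pipeline.writeYaml();
```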
You can also combine the usage of stage and name when adding jobs.
This approach is particularly useful when dealing with a large number of jobs,
where some groups of jobs should run sequentially while jobs within each group
should run in parallel. Here's an example to illustrate this scenario:
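The following sketch (the environments and services are illustrative) shows the idea: the stage argument keeps the environments in sequential stages, while the name argument merely distinguishes the services so that they run in parallel within each environment.

```typescript
import { Job, Pipeline } from "gcix"; // import path is an assumption

function jobFor(target: string): Job {
  return new Job({
    stage: "do_something",
    scripts: [`./do-something-on.sh ${target}`],
  });
}

const pipeline = new Pipeline();
for (const env of ["development", "test"]) {
  for (const service of ["service1", "service2"]) {
    pipeline.addChildren({
      jobsOrJobCollections: [jobFor(`${service}-${env}`)],
      stage: env,     // environments get distinct stages and run sequentially
      name: service,  // services only get distinct names and run in parallel
    });
  }
}
pipeline.writeYaml();
```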