Gitolite runs trigger code at several different times. The features you
enable in the rc file determine what commands to run (or functions in perl
modules to call) at each trigger point. Examples of trigger points are
`INPUT`, `PRE_GIT`, `POST_COMPILE`, etc.; the full list is covered later on
this page.
!!! note ""

    Quick tip: triggers are to gitolite what hooks are to git; we simply use a
    different name to avoid constantly having to clarify which hooks we mean!
    The other difference in gitolite is that each trigger runs multiple pieces
    of code, not just one program with the same name as the hook, like git
    does.
There are two types of trigger programs. Standalone scripts are placed in the
`triggers` directory or its subdirectories. Such scripts are quick and easy to
write, in any language of your choice.

Triggers written as perl modules are placed in `lib/Gitolite/Triggers`. Perl
modules have to follow some conventions (see some of the shipped modules for
ideas), but the advantage is that they can set environment variables and
change the argument list of the `gitolite-shell` program that invokes them.
If you intend to write your own triggers, it's a good idea to examine a
default install of gitolite, paying attention to the shipped triggers and how
the rc file refers to them.
It's easy to manually fire triggers from the server command line. For example:

    gitolite trigger POST_COMPILE
However, if the triggered code depends on arguments (see next section) this
won't work. (The `POST_COMPILE` trigger programs all just happen to not
require any arguments, so it works.)
Triggers receive the following arguments:

*   Any arguments mentioned in the rc file (for an example, see the renice
    command).

*   The name of the trigger as a string (example, `"POST_COMPILE"`), so you
    can call the same program from multiple triggers and it can know where it
    was called from.

*   And finally, zero or more arguments specific to the trigger, as given in
    the next section.
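
For instance, here is a minimal standalone trigger script, in perl (though any
language works), that simply logs whatever it receives. This is only a sketch;
the script name (`triggers/log-args`) and the log path are made up:

    #!/usr/bin/perl
    # hypothetical script "triggers/log-args"; add 'log-args' to one or more
    # trigger lists in the rc file (see later in this page for how).  It
    # appends every argument it was called with to a log file, which is a
    # handy way to see exactly what a given trigger passes.
    use strict;
    use warnings;

    open my $fh, ">>", "$ENV{HOME}/trigger-args.log" or exit 0;
    print $fh scalar(localtime), " ", join(" ", @ARGV), "\n";
    close $fh;
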
Here are the rest of the arguments for each trigger, plus a brief description of when the trigger runs. (Note that when the repo name is passed in as an argument, it is without the '.git' suffix).
`INPUT` runs before pretty much anything else. INPUT trigger scripts must be
in perl, since they manipulate the arguments and the environment of the
`gitolite-shell` program itself. Most commonly they will read/change `@ARGV`
and/or `$ENV{SSH_ORIGINAL_COMMAND}`.

There are certain conventions to adhere to; please see some of the shipped
samples or ask me if you need help writing your own.
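
As a rough sketch (not one of the shipped modules), an INPUT handler is a sub
in a package under `Gitolite::Triggers` that pokes at `@ARGV` or
`$ENV{SSH_ORIGINAL_COMMAND}`. The module name, the rc entry, and the rewrite
it performs below are all invented for illustration; Alias.pm, which ships
with gitolite, is a real example of this kind of trigger:

    package Gitolite::Triggers::Rewrite;
    # hypothetical module: lib/Gitolite/Triggers/Rewrite.pm, enabled by
    # putting 'Rewrite::input' in the INPUT list of the rc file
    use strict;
    use warnings;

    sub input {
        # example only: silently map an old repo name to a new one in the
        # ssh command the client sent
        $ENV{SSH_ORIGINAL_COMMAND} =~ s(old-name\.git)(new-name.git)
            if $ENV{SSH_ORIGINAL_COMMAND};
    }

    1;
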
`ACCESS_1` runs after the first access check. Extra arguments:
'result' is the return value of the access() function. If it contains the uppercase word "DENIED", the access was rejected. Otherwise it is the refex that caused the access to succeed.
!!! note ""

    Note that if access is rejected, gitolite-shell will die as soon as it
    returns from the trigger.
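
For example, a small standalone script on the `ACCESS_1` list could record
denied attempts. The argument positions assumed below (trigger name, then
repo, user, access type, result) should be verified on your install, for
instance with a logging script like the one shown earlier; the log path is
made up:

    #!/usr/bin/perl
    # hypothetical script "triggers/log-denials", added to the ACCESS_1 list
    use strict;
    use warnings;

    # assumed order: any rc-file args, trigger name, repo, user, aa, result
    my ($trigger, $repo, $user, $aa, $result) = @ARGV;
    exit 0 unless defined $result and $result =~ /DENIED/;

    open my $fh, ">>", "$ENV{HOME}/denials.log" or exit 0;
    print $fh scalar(localtime), ": $user was denied '$aa' access to $repo\n";
    close $fh;
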
`ACCESS_2` runs after the second access check, which is invoked by the update
hook to check the ref. Extra arguments:

`ACCESS_2` also runs on each VREF that gets checked. In this case the "ref"
argument will start with "VREF/", and the last two arguments won't be passed.

'result' is similar to `ACCESS_1`, except that it is the update hook which
dies as soon as access is rejected for the ref or any of the VREFs. Control
then returns to git, and then to gitolite-shell, so the `POST_GIT` trigger
will run.
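
A script on the `ACCESS_2` list can use that "VREF/" prefix to ignore (or
single out) VREF checks. The example below is a sketch only; the argument
order (trigger name, repo, user, access type, ref, result) is an assumption
to be verified on your install, and the log path is made up:

    #!/usr/bin/perl
    # hypothetical script on the ACCESS_2 list that only logs VREF checks
    use strict;
    use warnings;

    my ($trigger, $repo, $user, $aa, $ref, $result) = @ARGV;
    exit 0 unless defined $ref and $ref =~ m(^VREF/);

    open my $fh, ">>", "$ENV{HOME}/vref.log" or exit 0;
    print $fh "$repo: $user, $aa, $ref => $result\n";
    close $fh;
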
`PRE_GIT` and `POST_GIT` run just before and after the git command. Extra
arguments:
!!! note ""

    Note that the `POST_GIT` trigger has no way of knowing if the push
    succeeded, because 'git-shell' (or maybe 'git-receive-pack', I don't
    know) exits cleanly even if the update hook died.
`PRE_CREATE` and `POST_CREATE` run just before and after a new repo is
created. In addition, any command that creates a repo (like 'fork') or
potentially changes permissions (like 'perms') may choose to run
`POST_CREATE`.
Extra arguments for normal repo creation (i.e., by adding a "repo foo" line to the conf file):
Extra arguments for wild repo creation:
`POST_COMPILE` runs after an admin push has successfully "compiled" the
config file. By default, the next thing is to update the ssh authkeys file,
then all the 'git-config's, gitweb access, and daemon access. No extra
arguments.
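
As an illustration, a `POST_COMPILE` script of your own might regenerate some
site-local file on every successful admin push. The script name and output
path below are invented; only the `gitolite list-phy-repos` command is a real,
shipped server command:

    #!/usr/bin/perl
    # hypothetical script "triggers/post-compile/list-repos" added to the
    # POST_COMPILE list; rewrites a plain-text list of all repos
    use strict;
    use warnings;

    my @repos = `gitolite list-phy-repos`;
    open my $fh, ">", "$ENV{HOME}/repo-list.txt" or exit 0;
    print $fh @repos;
    close $fh;
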
Note: for gitolite v3.3 or less, adding your own scripts to a trigger list was simply a matter of finding the trigger name in the rc file and adding an entry to it. Even for gitolite v3.4 or higher, if your rc file was created before v3.4, it will continue to work, and you can continue to add triggers to it the same way as before.
The rc file (from v3.4 on) does not have trigger lists; it has a simple list of "features" within a list called "ENABLE" in the rc file. Simply comment out or uncomment appropriate entries, and gitolite will internally create the trigger lists correctly.
This is fine for triggers that are shipped with gitolite, but does present a problem when you want to add your own.
Here's how to do that. Let's say you wrote yourself a trigger script called
'foo', to be invoked from the `POST_CREATE` trigger list. Just add the
following to the rc file, just before the ENABLE section:

    POST_CREATE =>
        [
            'foo'
        ],
Since the ENABLE list pulls in the rest of the trigger entries, this will be
effectively as if you had done this in a v3.3 rc file:

    POST_CREATE =>
        [
            'foo',
            'post-compile/update-git-configs',
            'post-compile/update-gitweb-access-list',
            'post-compile/update-git-daemon-access-list',
        ],
As you can see, the 'foo' gets added to the top of the list.
If your trigger is a perl module, as opposed to a standalone script or
executable, the process is almost the same as above, except that what you add
to the rc file looks like this:

    POST_CREATE =>
        [
            'Foo::post_create'
        ],
Gitolite will add the `Gitolite::Triggers::` prefix to the name given there.
The subroutine to be run (in this example, `post_create`) is looked for in the
`Gitolite::Triggers::Foo` package, so this requires that the perl module start
with a package header like this:

    package Gitolite::Triggers::Foo;
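
Putting the pieces together, a made-up but complete Foo.pm might look like the
sketch below. The argument handling assumes the module's sub receives the same
things a standalone script would see in `@ARGV` (any rc-file args, the trigger
name, then the trigger-specific args); check with a logging trigger if you
rely on the exact positions:

    package Gitolite::Triggers::Foo;
    # hypothetical module: lib/Gitolite/Triggers/Foo.pm, matching the
    # 'Foo::post_create' rc entry shown above
    use strict;
    use warnings;

    sub post_create {
        my ($trigger, $repo, $user) = @_;   # assumed order; illustration only
        print STDERR "Foo: $trigger fired for '$repo' (user: $user)\n";
    }

    1;
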
You can use the 'gitolite query-rc' command to see what the trigger list
actually looks like. For example:

    gitolite query-rc POST_CREATE
If you have code that latches onto more than one trigger, collecting data
(such as for logging), then the outputs may be intermixed. You can record the
value of the environment variable `GL_TID` to tie together related entries.

The documentation on the log file format has more on this.
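
For instance, a trigger script that writes its own log lines could prefix each
one with that value, so related entries can be grepped out together later.
This is just a sketch; the log path is made up:

    #!/usr/bin/perl
    # hypothetical trigger script that tags its log lines with GL_TID
    use strict;
    use warnings;

    my $tid = $ENV{GL_TID} || $$;   # fall back to our own pid if unset
    open my $fh, ">>", "$ENV{HOME}/my-trigger.log" or exit 0;
    print $fh join("\t", $tid, @ARGV), "\n";
    close $fh;
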
If you look at CpuTime.pm, you'll see that its `input()` function doesn't
change the arguments or the environment at all; it just sets a package
variable to record the start time. Later, when the same module's `post_git()`
function is invoked, it uses this variable to determine the elapsed time.
(This is a very nice and simple example of how you can implement features by latching onto multiple events and sharing data to do something).
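
The skeleton of that pattern, stripped of the actual CPU-time bookkeeping and
with made-up names, looks something like this:

    package Gitolite::Triggers::Elapsed;
    # hypothetical module illustrating the same idea as CpuTime.pm: latch onto
    # INPUT and POST_GIT ('Elapsed::input' and 'Elapsed::post_git' in the rc
    # file) and share a package variable between the two subs
    use strict;
    use warnings;
    use Time::HiRes qw(gettimeofday tv_interval);

    my $t0;                    # set in input(), read in post_git()

    sub input {
        $t0 = [gettimeofday];  # remember when gitolite-shell started work
    }

    sub post_git {
        return unless $t0;
        printf STDERR "elapsed: %.2f seconds\n", tv_interval($t0);
    }

    1;
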
You can even change the reponame the user sees, behind his back. Alias.pm handles that.
Finally, as an exercise for the reader, consider how you would create a brand new env var that contains the comment field of the ssh pubkey that was used to gain access, using the information here.