Home


Jenkins' real top page lives at http://jenkins-ci.org/ and links to three pages in the Wiki

News

Do you blog about Jenkins? Do you have an interesting URL to share with the Jenkins community? Check out our News Aggregator.

New Wiki URL Requirement for Plugins

Let's say you're browsing the 'Available' tab in the Jenkins plugin manager for interesting-looking plugins. How do you learn more about them, preferably without installing them on your production instance? You click the plugin's name, which usually links to the plugin's wiki page, of course!

Unfortunately, it's possible for plugins to be published without a wiki page or any other documentation aside from what's provided in the plugin itself. Users rely on wiki pages and similar documentation to learn more about a plugin before installing or upgrading it: its features, limitations, or recent changes. Additionally, plugin wiki pages have a special section at the top that provides an automatically generated technical overview of the plugin, such as dependencies on other plugins, the minimum compatible Jenkins version, a list of developers, and links to the source code repository and issue tracker component. Everyone learning about or using a plugin benefits from a plugin wiki page, and luckily, almost all plugins have one!

To ensure that every plugin has at least a basic wiki page with some documentation, we decided to only publish plugins in the Jenkins update center that have and link to a wiki page. To keep the impact to a minimum, we're implementing this plan in several stages.

The first stage went live on June 1: All existing plugins that don't have a (valid) wiki link got a wiki link assigned by the update center (a so-called 'override'), either to an existing wiki page if there was one, or a generic "This plugin has no documentation" wiki page otherwise. This ensures that no currently existing plugins get dropped from the update center at this point. Of course, new plugins that don't provide a wiki URL and don't have an override URL will not show up at all.

The second stage will be enabled later this year: We're planning to remove all the overrides mentioned above. At this point, plugins may get removed from the update center if they still don't specify a wiki URL. Of course this isn't our goal, and we'll try to work with plugin authors to prevent this from happening.

So what can you do? Check the current overrides list to see whether the plugins you care about are affected, and if so, see the landing page in the wiki to learn what you can do. If you have any questions about this process not covered by the wiki, ask us on the Jenkins developers mailing list.

JUC Speaker Blog Series: Martin Hobson, JUC U.S. East

I’ve been using Jenkins for some time now as the build server for the various projects that are assigned to our four-person software development team, but recently I had exposure to how things were done in a much larger team, and I came away with a better understanding of the kinds of demands that are placed on a build pipeline in these environments. It was quite an education – while the CI pipelines that I administer in our small team might require a handful of virtual machines in our corporate cloud, the pipeline in this team supported over one hundred developers and required several hundred VM instances at any given time.

When operating at this scale, efficiency becomes important, as the Amazon cloud charges add up quickly. Using some relatively simple techniques, I was able to gain insight into what actually happened in the more complex build jobs and learned just how these VM instances were utilized. These build jobs each configured over a dozen virtual machines, and understanding the startup and execution flows was critical to making changes and improving efficiency. I will be discussing how to instrument and analyze these complex builds in my lightning talk, "Visualizing VM Provisioning with Jenkins and Google Charts", and hope to see you all there!

This post is by Martin Hobson, Senior Software Developer at Agilex Technologies. If you have your ticket to JUC U.S. East, you can attend his lightning talk "Visualizing VM Provisioning with Jenkins and Google Charts" on Day 1.

JUC IS HERE! JUC U.S. East will begin with registration at 7AM on Thursday, June 18. The two-day conference is sure to be a blast! If you have not registered, you can still get a ticket! Check out the agenda for JUC U.S. East here and find the link to register.



Thank you to our sponsors for the 2015 Jenkins User Conference World Tour:

JUC Speaker Blog Series: Stephan Hochdörfer, JUC Europe

I am very much looking forward to the Jenkins User Conference in London, where I will present our insights on how to use Jenkins in a PHP-related environment. Since moving to Jenkins about 5 years ago, bitExpert has gained a lot of experience in running and managing a distributed Jenkins infrastructure. bitExpert builds custom applications for our clients, which means that we have to deal with different project infrastructures, e.g. different PHP versions. We rely heavily on the build node concept of Jenkins, which I will briefly outline in the session. Besides that, I will give some in-depth insights on how we use Jenkins on a daily basis for the "traditional" CI-related tasks (e.g. linting code, checking code style, running tests) as well as how Jenkins is used to power our integration tests. Last but not least, I will cover how Jenkins acts as a kind of backbone for our Satis server, which allows us to host the metadata of our company's private Composer packages. Throughout the talk I will point out which Jenkins plugins we use in the different contexts, to give you a good starting point if you are new to the Jenkins ecosystem.
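
For readers who are new to this space, here is a rough sketch of what such a PHP CI job can look like when defined with the Jenkins Job DSL plugin. This is a hypothetical example rather than bitExpert's actual configuration; the job name, repository URL and tool choices (Composer, PHP_CodeSniffer, PHPUnit) are assumptions.

    // Hypothetical Job DSL definition of a simple PHP CI job
    job('example-php-app-ci') {
        scm {
            git('https://example.com/example-php-app.git', 'master')  // assumed repository
        }
        triggers {
            scm('H/5 * * * *')  // poll the SCM every few minutes
        }
        steps {
            shell('composer install --no-interaction')                     // install dependencies
            shell('find src -name "*.php" -print0 | xargs -0 -n1 php -l')  // lint every PHP file
            shell('vendor/bin/phpcs --standard=PSR2 --report=checkstyle --report-file=checkstyle.xml src || true')  // code style check, don't fail the build
            shell('vendor/bin/phpunit --log-junit junit.xml')              // run the test suite
        }
        publishers {
            checkstyle('checkstyle.xml')  // requires the Checkstyle plugin
            archiveJunit('junit.xml')     // publish the PHPUnit results
        }
    }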

This post is by Stephan Hochdoerfer, Head of Technology at bitExpert AG. If you have your ticket to JUC Europe, you can attend his talk "Jenkins for PHP Projects" on Day 2.

Still need your ticket to JUC? If you register with a friend you can get 2 tickets for the price of 1! Register here for a JUC near you.



Thank you to our sponsors for the 2015 Jenkins User Conference World Tour:

JUC Speaker Blog Series: Damien Coraboeuf, JUC Europe

Scaling and maintenance of thousands of Jenkins jobs

How do you avoid creating a jungle of jobs when dealing with thousands of them?

In our organisation, we have one framework, which is used to develop products. Those products are themselves used to develop end user projects. Maintenance and support are needed at each level of delivery and we use branches for this. This creates hundreds of combinations.

Now, for each product or project version (or branch), we have a delivery pipeline. We start by compiling, testing, packaging, publishing. Then we deploy the application on the different supported platforms and go through different levels of validation, until we’re ready for delivery. Aside from a few details and configuration elements, most of the pipelines are identical from one branch to the other, from one project to the other.

So, one framework, some products, several projects, maintenance branches, complex pipelines… We end up having many, many jobs to create, duplicate and maintain. Before even going in this direction, we saw this as a blocking issue - there was no way we could manually maintain thousands of jobs on a day-to-day basis.

The solution we were looking for should have the following characteristics:

  • Self-service - our goal is to delegate job and branch administration in Jenkins to the projects, in order to reduce support time
  • Security - we didn’t want to open Jenkins to the projects at the configuration level - not acceptable in our context
  • Simplicity - the solution should be simple enough to be manageable by people not knowledgeable about the core technologies of Jenkins
  • Extensibility - the solution must be flexible enough to allow extensions when needed

When we thought about using the Job DSL plug-in, delegating the creation of the pipeline to the project teams was OK from a self-service point of view, but it was not secure and definitely not simple for people who don't know Jenkins.

In the end, we opted for a solution where:

  • The Jenkins team develops, maintains and versions several pipeline libraries
  • A project team would edit a simple property file listing the characteristics of the current branch, like which type of platform is supported, which version of the pipeline library to use, etc.
  • Upon commit of this shopping list, the complete branch pipeline is regenerated using the given version of the pipeline library
  • The pipeline library code reads the “shopping list” property file and runs a Job DSL script to generate the branch pipeline according to those parameters (sketched below)
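
To make that last step concrete, here is a minimal sketch of how a Job DSL script could read such a “shopping list” and generate a branch pipeline. The property names, job names and build scripts are hypothetical; the real pipeline libraries and seed jobs are considerably richer.

    // Hypothetical branch.properties ("shopping list"):
    //   branch.name      = release-1.2
    //   platforms        = linux,windows
    //   pipeline.version = 1.4.0
    def props = new Properties()
    props.load(new StringReader(readFileFromWorkspace('branch.properties')))

    def branch = props.getProperty('branch.name', 'master')
    def platforms = props.getProperty('platforms', 'linux').split(',')

    // One build job for the branch...
    job("myproject-${branch}-build") {
        scm {
            git('https://example.com/myproject.git', branch)  // assumed repository
        }
        steps {
            shell('./build.sh')  // compile, test, package, publish
        }
    }

    // ...and one deployment job per supported platform
    platforms.each { platform ->
        job("myproject-${branch}-deploy-${platform.trim()}") {
            steps {
                shell("./deploy.sh ${platform.trim()}")
            }
        }
    }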

By default, the pipeline library generates a classic pipeline, suitable for most needs. It is also possible to define and use extensions, like having additional jobs in the pipelines.

In case of new features or defects, we develop or branch a new version of the pipeline library, and projects or branches can use it by changing the version in their shopping list file.

A project is brought into the system simply by generating a project seed job. From it, the authorised members can generate the branch seed and any branch pipeline at any time. Those seed jobs and the pipelines themselves can also be driven directly from the SCM using our plugin.

The project teams are now autonomous and can pilot their pipelines without requesting any support. They act in a secure and isolated way, and cannot compromise the shared environment. The “shopping list” file is simple and well documented. The system is not rigid and allows for extensions.

This platform was initially developed for a very specific framework and a set of projects which depend on it, but it has since been extended to support other stacks. It is structured in two parts:

  • The seed platform itself - generation of branch structures in Jenkins and trigger end points so it can be piloted from the SCM
  • The pipeline libraries, referenced from the shopping list files

We still allow some small tools and applications to define their pipeline directly by providing a Job DSL script.

Using the same principle, we can also pilot other tools in the ecosystem - like Artifactory or Ontrack.

I'll talk about this seed platform on June 24th at the Jenkins User Conference in London.

This post is by Damien Coraboeuf, Continuous Delivery Expert at Clear2Pay. If you have your ticket to JUC Europe, you can attend his talk "Scaling of Jenkins Pipeline Creation and Maintenance" on Day 2.

Still need your ticket to JUC? If you register with a friend you can get 2 tickets for the price of 1! Register here for a JUC near you.






Thank you to our sponsors for the 2015 Jenkins User Conference World Tour:

JUC Speaker Blog Series: Will Soula, JUC U.S. East

Chat Ops and Jenkins

I am very excited to be attending the Jenkins User Conference on the East Coast this year. This will be my third presentation at a JUC and my fourth time to attend, but my first on the East Coast. I have learned about a lot of cool stuff in the past, which is why I started presenting: to tell people about the cool stuff we are doing at Drilling Info. One of the cooler things we have implemented in the last year is ChatOps and our bot Sparky. It started as something neat to play with ("Oooo lots of kittens") but quickly turned into something more serious.

Ever get asked the same questions over and over? What jobs to run to deploy your code? What is the status of the build? These questions and more can all be automated so you do not have to keep answering them. Furthermore, when you do get asked, you can show them, and everyone else, how to get the information by issuing the proper commands in a chat room for everyone to see. With chat rooms functioning as the 21st-century water coolers, putting the information in the middle of the conversation is a powerful teaching technique. You are not sending people to some outdated documentation on how to get their code deployed, nor are you showing them the steps today only for them to be forgotten tomorrow. Instead, you deploy your code and they see the exact steps needed to get their code deployed.

Even more impressive is the way ChatOps can bring your company together. Recently our CTO got a HipChat account so he could interact with Sparky. This gave me the idea that if we extend Sparky to deliver information useful to other teams (Sales, Marketing, Finance, etc.), we would be able to get these wildly disparate teams in the same chat room together, and hopefully they will talk to and learn from each other. Where DevOps is the bringing together of Dev and Ops, ChatOps can be the bridge across the entire organization. Come see my presentation on Day 1, Track 1 at 4:00 PM to learn how ChatOps can enrich your team, how Drilling Info is using it, and what our future plans for ChatOps entail.

This post is by Will Soula, Senior Configuration Management/Build Engineer at Drilling Info. If you have your ticket to JUC U.S. East, you can attend his talk "Chat Ops and Jenkins" on Day 1.

Still need your ticket to JUC? If you register with a friend you can get 2 tickets for the price of 1! Register here for a JUC near you.






Thank you to our sponsors for the 2015 Jenkins User Conference World Tour:

Releases

