Customizing Docker Images

Back in the day, when I was beginning to work on public-facing projects, setting up a development environment was really tedious. You had to install all the required software on the host machine, and relocating a project from one host to another sometimes turned out to be the real work.

Now the trend has changed. When you start work on a project, you set up a virtual machine, either on a remote computer that your company provides or locally (at least in my company, people prefer to work on virtual machines). There are many benefits, but the one I use all the time is the ability to take a virtual machine from one host and run it on a different one. Beyond that, the ability to run multiple operating systems is very valuable for both development and testing.

In my company we use Windows systems for development, but the client we are working for uses Linux, which brought some unexpected issues: since we are working on different platforms, we have to test everything in both environments. To align ourselves, we had to dedicate a full Linux server running CentOS 7 with various application and web servers. While running some applications we saw slow response times and sometimes complete crashes of some applications, which finally led us to containerization.

There are many alternatives, but we chose Docker. Docker is the container technology with the most public traction and is almost the de facto container standard right now. Docker containers wrap a piece of software in a complete file system that contains everything needed to run it: code, system tools, system libraries and so on. This guarantees that the software will always run the same, regardless of its environment (host operating system).

In this post I won’t go into detail about how Docker works. I assume that you have it installed and have a basic knowledge of how it works. I will cover how to edit a running container and commit the result as a new image when we have to install or update configurations and/or software.

By default, once an image is created, it will stay as it is unless we commit new changes.

I will start by downloading a Tomcat image from the official distribution (check Docker Hub) and add a user to manage it, just to show how to do something very basic.

Search for the image using image search command: $docker search tomcat

Pull the image you prefer ($docker pull tomcat). This pulls the latest available image.

Once the download has finished, run the container instance with:
$docker run -d -p 8888:8080 tomcat

where,

  • -d tells Docker to run the container in daemon mode (in the background, so that we can keep using our terminal window without opening a new one)
  • -p maps a port inside the Docker container (8080) to a port on the host (8888) so it can be accessed from the host

Now we have Tomcat up and running on http://localhost:8888/, but we are still unable to log in through the manager GUI to manage the server. Let’s add a user to be able to do so. Open a shell inside the running container with:

docker exec -it {container} bash

You can change the user details if you need to. Also, note that the image name and tag you commit to can be any names you like.
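Since the screenshots compress several steps, here is a rough sketch of the whole sequence. The container ID, user name, password, and the tomcat-users.xml path are assumptions (the path below matches the official Tomcat image, but may differ for other images); adjust them to your setup. Overwriting the whole file is crude and only for illustration — in practice you would edit it in place.

```shell
# Open a shell inside the running container (replace <container-id> with yours)
docker exec -it <container-id> bash

# Inside the container: add a manager user to conf/tomcat-users.xml
# (assumed path for the official image; user/password are placeholders)
echo '<tomcat-users>
  <role rolename="manager-gui"/>
  <user username="admin" password="secret" roles="manager-gui"/>
</tomcat-users>' > /usr/local/tomcat/conf/tomcat-users.xml
exit

# Back on the host: commit the modified container as a new image
docker commit <container-id> tomcat:custom
```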

Now you have a new image with everything from the previous one plus your settings. Note that if you exit the container before committing, all the changes will be lost.

Run the new image and check that you can log in to the manager GUI with the user you created:
docker run -d -p 8888:8080 tomcat:custom

Note: if you still get a 403 Access Denied error after adding a user to Tomcat, try changing the following in webapps/manager/META-INF/context.xml and commit again (you can use the same image-name:tag to overwrite the previous image):

<Context privileged="true">
<!-- <Valve className="org.apache.catalina.valves.RemoteAddrValve" allow="127.0.0.1"/> -->
</Context>

Intro to Jenkins Pipelines and Publishing Over SSH

In many projects, things that seem very small turn out to be decisive factors in whether to continue on the current path or find a better one. From simple text editors to tools used over long periods, we all have our own preferences for each tool at hand. Merging these preferences can become a task in itself, and while this happens in any kind of group work, other factors also shape the path forward.

This time we came across an issue that made us think about how to proceed. Our project is developed as an integration of many standalone microservices, which led us to use different resource files for our remote development and production environments. After considering different options, we finally decided to deploy these files over SSH (from our build server to the machine hosting the application server). Since we use Jenkins for CI/CD, we needed an SSH plugin.

In this post, I would like to show how we used the Publish Over SSH plugin to achieve what we needed.

I assume that you are already familiar with how to use SSH and configure it on Jenkins.

The plugin has the following features:

  • SCP – send files over SSH (SFTP)
  • Execute commands on a remote server (can be disabled per server configuration, or for the whole plugin)
  • Use username and password (keyboard-interactive) or public key authentication
  • Passwords/passphrases are encrypted in the configuration files and in the UI
  • SSH SFTP/SSH Exec can be used as a build step during the build process (Freestyle and Matrix projects)
  • Run SSH before a (Maven) project build, or after a build whether the build was successful or not (see Build wrappers below)
  • The plugin is “promotion aware” (send files directly from the artifacts directory of the build being promoted; see Promotions)
  • Optionally override the authentication credentials for each server in the job configuration (or provide them if they have not been set for that server in the global configuration)
  • Optionally retry if the transfer of files fails (useful for flaky connections)
  • Enable the command/script to be executed in a pseudo-TTY (since some commands are not allowed to run without a TTY session)

If you would like to proceed with the GUI version of this plugin, you can see how to configure it here.

Our first step will be to define what stages to use:

node {

    stage('Getting ready...') {
        git branch: "dev", credentialsId: 'your_id_on_jenkins', url: 'https://github.com/amenski/jenkins_pipeline.git'
    }

Building stage:

    stage('Build') {
        script {
            try {
                if (isUnix()) {
                    sh 'mvn clean package'
                } else {
                    bat 'mvn clean package'
                }
            } catch (err) {
                throw err
            }
        }
    }

Let’s assume that till this point everything has gone right.

    stage('SSH transfer') {
        script {
            sshPublisher(
                continueOnError: false, failOnError: true,
                publishers: [
                    sshPublisherDesc(
                        configName: "${env.SSH_CONFIG_NAME}",
                        verbose: true,
                        transfers: [
                            sshTransfer(
                                sourceFiles: "${path_to_file}/${file_name}, ${path_to_file}/${file_name}",
                                removePrefix: "${path_to_file}",
                                remoteDirectory: "${remote_dir_path}",
                                execCommand: "run commands after copy?"
                            )
                        ]
                    )
                ]
            )
        }
    }

In this block we have the sshPublisher keyword, which holds different values under their respective keywords. By default, the plugin marks the build “UNSTABLE” if there are errors; you can change this behavior by adding failOnError: true, which tells it to fail the job if there are any problems. The publishers array holds details about where to send what, plus some additional tuning parameters, including a command to run after the files have been published to the server.

If you need more transfers or commands to issue, the sshTransfer block can be repeated:

transfers: [
    sshTransfer(
        execCommand: "run commands before copy?"
    ),
    sshTransfer(
        sourceFiles: "${path_to_file}/${file_name}, ${path_to_file}/${file_name}",
        removePrefix: "${path_to_file}",
        remoteDirectory: "${remote_dir_path}",
        execCommand: "run commands after copy?"
    )
]
Deploy:

    stage('Deploy') {
        if (currentBuild.currentResult == "SUCCESS" || currentBuild.currentResult == "UNSTABLE") {
            if (isUnix()) {
                sh 'echo "Build Succeeded."'
            } else {
                bat 'echo "Build Succeeded."'
            }
        }
    }
}

I personally recommend using placeholders in every configuration-related field; it makes future changes much easier. To accomplish this, you can use environment variables or parameters, as in the example above, and access them with “${env.VAR_NAME}” or “${params.VAR_NAME}”.
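As a sketch of that recommendation, the placeholders could be set at the top of the scripted pipeline before the stages run. The variable names and values here are assumptions matching the placeholders used earlier, not the plugin’s required names:

```groovy
node {
    // Assumed values -- adjust to your Jenkins SSH server configuration
    env.SSH_CONFIG_NAME = 'my-build-server'   // name of a server configured under Publish Over SSH
    env.REMOTE_DIR      = '/opt/app/config'   // hypothetical remote target directory

    // ... stages follow, reading the values via "${env.SSH_CONFIG_NAME}" etc.
}
```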

25 Reasons to use Ubuntu Linux instead of Windows

Technomania

Lots of people ask why they should use Linux (any distro; I prefer Ubuntu) when they are currently using Windows or Mac. Given a free choice of operating system (OS), many people would prefer Linux over anything else. Another thing that keeps users away from Linux is hardware support, which is currently not that good for Linux. But that is because we have been living in a world where Windows is forced on everyone who buys a PC, so hardware manufacturers are also pushed to support Windows before anything else. This support has improved tremendously over the last couple of years, and at the same time the Linux community has worked on making Linux (Ubuntu in particular) more user friendly.
Anyway, it is an ongoing debate whether you should use Linux, Windows or Mac. And at some point it is a…


How to repair GRUB

Last time, installing Windows stopped my Ubuntu from booting up. After some trial and error I managed to fix the boot loader with a few commands. I thought it might be worth sharing for those who may face the same problem.
First of all, you need a live CD or bootable USB stick of any Linux distribution; Ubuntu, Mint or Fedora will do.
Boot into the live CD or USB, open a terminal window (Ctrl + Alt + T) and type the following:

  1. sudo fdisk -l

The output lists your disks and partitions. Identify the partition that holds your Linux root filesystem (in my case, /dev/sda3); that is the partition we are going to install GRUB for.

  2. sudo blkid :- Use this to make sure you have selected the right device; its output shows each partition’s filesystem type (nowadays ext4 is the default).

So now you are sure about where to install grub.

By default, GRUB is installed in the /boot/grub directory when you install Ubuntu or other Linux distros, so you have to mount /dev/sda3 on a directory of your preference. To do this, type the following:

sudo mount /dev/sda3 /mnt :- where /mnt is any directory you like.

NOTE:- If you have a separate boot partition, skip the above command and mount the boot partition at /mnt/boot (sudo mount /dev/sdX /mnt/boot).

  3. sudo grub-install --boot-directory=/mnt/boot /dev/sda :- Make sure the last argument (/dev/sda) is the device (the whole hard disk) you need to install the boot loader on, not a particular partition like /dev/sda3.
  4. Reboot your system and it should boot properly.

 

Basics of Linux iptables

iptables is a user-space application program that allows a system administrator to configure the tables provided by the Linux kernel firewall and the chains and rules they store (Wikipedia). It is used to set up, maintain, and inspect the tables of IPv4 packet-filter rules in the Linux kernel.

Each table contains a number of chains. These chains in turn contain a list of rules which can match a set of packets and each rule specifies what to do with a packet that matches.

When you install a Linux-based operating system, iptables is already present in the kernel, but by default it allows all incoming and outgoing traffic. It is therefore not recommended to run iptables with its default configuration.

The following are the possible targets for iptables policies:

  1. ACCEPT:- Allow the connection.
  2. DROP:- Drop the connection and act as if it never happened. This is best if you don’t want the source to realize your system exists.
  3. REJECT:- Don’t allow the connection, but send back an error. This is best if you don’t want a particular source to connect to your system, but you want them to know your firewall blocked them.
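To make the three targets concrete, here is a minimal firewall configuration sketch using them. It must be run as root, and the choice of allowed port (SSH on 22) is just an assumption for the example; lock yourself out of a remote machine at your own risk.

```shell
# Default policies: drop inbound and forwarded traffic, allow outbound
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT

# Allow loopback traffic and replies to connections we initiated
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# ACCEPT incoming SSH; REJECT everything else with an error back to the sender
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -j REJECT
```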

Tables

Currently there are five independent tables (which tables are present at any time depends on the kernel configuration options and on which modules are loaded). For the time being we will look at the basics of the most commonly used tables (FILTER and NAT).

Get Ubuntu kernel information

The Linux kernel is a core component of every Ubuntu system. It is primarily responsible for managing the system hardware. Having the ability to check what version you are running can be beneficial. From time to time there are vulnerabilities that affect specific kernel versions, and being able to verify whether or not you are affected can be helpful. Certain features and hardware support are tied to certain kernel versions as well. This post will outline how to discover your Ubuntu kernel version in two easy steps.

Discovering your Ubuntu kernel version can be done in just a few quick steps. I’ll outline these steps below.

  1. Open a Terminal window (Ctrl + Alt + T)
  2. Type the following command:
man 1 uname
This displays the documentation for the uname command. There you can see the different options, including -a, -s, -n, -r, -v, -m, -p, -i and -o. These options have their own uses; for example, -a displays all information about your system.

Now type uname -r.

That is all; you will see an output like: 3.13.0-32-generic
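Putting the two steps together, this is all it takes in a terminal; `-r` prints just the kernel release, while `-a` prints everything at once:

```shell
# Kernel release only, e.g. 3.13.0-32-generic
uname -r

# All system information: kernel name, hostname, release, version, machine
uname -a
```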

How to disable Ubuntu's global menu bar

Ubuntu 14.04 has recently been released, and it now includes a setting for enabling local menus, allowing you to easily move the menu bar for each program into that program’s window rather than displaying it at the top of the screen.

To enable local menus on Ubuntu 14.04, click the System Settings icon on the Unity bar.

System Settings

From the System Settings box, click the Appearance icon in the Personal section.

Appearance

From this window, select the “Behaviour” tab.

behaviour

Then, under the “Show the menus for a window” section, select the radio button labelled “In the window’s title bar”.

Now you are all set: the menus will appear in the title bar of each window you open.


Enjoy using Ubuntu.

5.8. Invoking the C Preprocessor

Most often when you use the C preprocessor you will not have to invoke it explicitly: the C compiler will do so automatically. However, the preprocessor is sometimes useful on its own.

The C preprocessor expects two file names as arguments, infile and outfile. The preprocessor reads infile together with any other files it specifies with `#include’. All the output generated by the combined input files is written to outfile.

Either infile or outfile may be `-’, which as infile means to read from standard input and as outfile means to write to standard output. Also, if outfile or both file names are omitted, standard output and standard input are used for the omitted file names.

Here is a table of command options accepted by the C preprocessor. These options can also be given when compiling a C program; they are passed along automatically to the preprocessor when it is invoked by the compiler.

`-P’

Inhibit generation of `#’-lines with line-number information in the output from the preprocessor. This might be useful when running the preprocessor on something that is not C code and will be sent to a program which might be confused by the `#’-lines.

`-C’

Do not discard comments: pass them through to the output file. Comments appearing in arguments of a macro call will be copied to the output before the expansion of the macro call.

`-traditional’

Try to imitate the behavior of old-fashioned C, as opposed to ANSI C.

  • Traditional macro expansion pays no attention to single-quote or double-quote characters; macro argument symbols are replaced by the argument values even when they appear within apparent string or character constants.
  • Traditionally, it is permissible for a macro expansion to end in the middle of a string or character constant. The constant continues into the text surrounding the macro call.
  • However, traditionally the end of the line terminates a string or character constant, with no error.
  • In traditional C, a comment is equivalent to no text at all. (In ANSI C, a comment counts as whitespace.)
  • Traditional C does not have the concept of a “preprocessing number”. It considers `1.0e+4’ to be three tokens: `1.0e’, `+’, and `4’.
  • A macro is not suppressed within its own definition, in traditional C. Thus, any macro that is used recursively inevitably causes an error.
  • The character `#’ has no special meaning within a macro definition in traditional C.
  • In traditional C, the text at the end of a macro expansion can run together with the text after the macro call, to produce a single token. (This is impossible in ANSI C.)
  • Traditionally, `\’ inside a macro argument suppresses the syntactic significance of the following character.

`-trigraphs’

Process ANSI standard trigraph sequences. These are three-character sequences, all starting with `??’, that are defined by ANSI C to stand for single characters. For example, `??/’ stands for `\’, so `’??/n’’ is a character constant for a newline. Strictly speaking, the GNU C preprocessor does not support all programs in ANSI Standard C unless `-trigraphs’ is used, but if you ever notice the difference it will be with relief. You don’t want to know any more about trigraphs.

`-pedantic’

Issue warnings required by the ANSI C standard in certain cases such as when text other than a comment follows `#else’ or `#endif’.

`-pedantic-errors’

Like `-pedantic’, except that errors are produced rather than warnings.

`-Wtrigraphs’

Warn if any trigraphs are encountered (assuming they are enabled).

`-Wcomment’

Warn whenever a comment-start sequence `/*’ appears in a `/*’ comment, or whenever a Backslash-Newline appears in a `//’ comment.

`-Wall’

5.7. The `#error’ and `#warning’ Directives

The directive `#error’ causes the preprocessor to report a fatal error. The rest of the line that follows `#error’ is used as the error message. You would use `#error’ inside of a conditional that detects a combination of parameters which you know the program does not properly support.
For example, if you know that the program will not run properly on a Vax, you might write
#ifdef __vax__
#error Won't work on Vaxen.
#endif

If you have several configuration parameters that must be set up by the installation in a consistent way, you can use conditionals to detect an inconsistency and report it with `#error’.

For example,
#if HASH_TABLE_SIZE % 2 == 0 || HASH_TABLE_SIZE % 3 == 0 \
|| HASH_TABLE_SIZE % 5 == 0
#error HASH_TABLE_SIZE should not be divisible by a small prime
#endif 

The directive `#warning’ is like the directive `#error’, but causes the preprocessor to issue a warning and continue preprocessing. The rest of the line that follows `#warning’ is used as the warning message. You might use `#warning’ in obsolete header files, with a message directing the user to the header file which should be used instead.

5.6. Conditional Syntax

A conditional in the C preprocessor begins with a conditional directive: `#if’, `#ifdef’ or `#ifndef’.

5.6.1. If

The `#if’ directive allows you to test the value of an arithmetic expression, rather than the mere existence of one macro. Its syntax is
#if expression

Your code here….

#endif /* expression */

expression is a C expression of integer type, subject to stringent restrictions. It may contain

  • Integer constants.
  • Character constants, which are interpreted as they would be in normal code.
  • Arithmetic operators for addition, subtraction, multiplication, division, bitwise operations, shifts, comparisons, and logical operations (&& and ||). The latter two obey the usual short-circuiting rules of standard C.
  • All macros in the expression are expanded before actual computation of the expression’s value begins.
  • Uses of the `defined’ operator, which lets you check whether macros are defined in the middle of an `#if’.
  • Identifiers that are not macros, which are all treated as zero(!). This allows you to write #if MACRO instead of #ifdef MACRO, if you know that MACRO, when defined, will always have a nonzero value. Function-like macros used without their function call parentheses are also treated as zero.

Note that `sizeof’ operators and enum-type values are not allowed. enum-type values, like all other identifiers that are not taken as macro calls and expanded, are treated as zero.

The controlled text inside of a conditional can include preprocessing directives. Then the directives inside the conditional are obeyed only if that branch of the conditional succeeds. The text can also contain other conditional groups. However, the `#if’ and `#endif’ directives must balance. If expression is not correctly formed, GCC issues an error and treats the conditional as having failed.

5.6.2. Else

The `#else’ directive can be added to a conditional to provide alternative text to be used if the condition fails. This is what it looks like:
#if expression
text-if-true
#else /* Not expression */
text-if-false
#endif /* Not expression */

If expression is nonzero, the text-if-true is included and the text-if-false is skipped. If expression is zero, the opposite happens.

You can use `#else’ with `#ifdef’ and `#ifndef’, too.

5.6.3.  Elif

One common case of nested conditionals is used to check for more than two possible alternatives. For example, you might have

#if X == 1
...
#else /* X != 1 */
#if X == 2
...
#else /* X != 2 */
...
#endif /* X != 2 */
#endif /* X != 1 */

Another conditional directive, `#elif’, allows this to be abbreviated as follows:
#if X == 1
...
#elif X == 2
...
#else /* X != 2 and X != 1*/
...
#endif /* X != 2 and X != 1*/

`#elif’ stands for “else if”. Like `#else’, it goes in the middle of a conditional group and subdivides it; it does not require a matching `#endif’ of its own. Like `#if’, the `#elif’ directive includes an expression to be tested. The text following the `#elif’ is processed only if the original `#if’-condition failed and the `#elif’ condition succeeds.

More than one `#elif’ can go in the same conditional group. Then the text after each `#elif’ is processed only if the `#elif’ condition succeeds after the original `#if’ and all previous `#elif’ directives within it have failed.

`#else’ is allowed after any number of `#elif’ directives, but `#elif’ may not follow `#else’.

5.6.4. Ifdef

The simplest sort of conditional is

#ifdef MACRO
controlled text
#endif /* MACRO */
This block is called a conditional group. controlled text will be included in the output of the preprocessor if and only if MACRO is defined. We say that the conditional succeeds if MACRO is defined, fails if it is not.

The controlled text inside of a conditional can include preprocessing directives. They are executed only if the conditional succeeds. You can nest conditional groups inside other conditional groups, but they must be completely nested. In other words, `#endif’ always matches the nearest `#ifdef’ (or `#ifndef’, or `#if’). Also, you cannot start a conditional group in one file and end it in another.

Even if a conditional fails, the controlled text inside it is still run through initial transformations and tokenization. Therefore, it must all be lexically valid C. Normally the only way this matters is that all comments and string literals inside a failing conditional group must still be properly ended.

The comment following the `#endif’ is not required, but it is a good practice if there is a lot of controlled text, because it helps people match the `#endif’ to the corresponding `#ifdef’. Older programs sometimes put MACRO directly after the `#endif’ without enclosing it in a comment. This is invalid code according to the C standard.

Sometimes you wish to use some code if a macro is not defined. You can do this by writing `#ifndef’ instead of `#ifdef’. One common use of `#ifndef’ is to include code only the first time a header file is included (a once-only include).

 

5.6.5. Keeping Deleted Code for Future Reference

If you replace or delete a part of the program but want to keep the old code around for future reference, you often cannot simply comment it out. Block comments do not nest (/* /* */ */ is not valid), so the first comment terminator inside the old code will end the commenting-out. The probable result is a flood of syntax errors.

One way to avoid this problem is to use an always-false conditional instead. For instance, put #if 0 before the code to be deleted and #endif after it. This works even if the code being turned off contains conditionals, but they must be entire conditionals (balanced `#if’ and `#endif’).

  • Do not use #if 0 for comments which are not C code. Use a real comment, instead.
  • The interior of #if 0 must consist of complete tokens; in particular, single-quote characters must balance.
  • Comments often contain unbalanced single-quote characters (known in English as apostrophes). These confuse #if 0. They don’t confuse `/*’.

 

Example: Let us see a program that displays the memory usage of our computer on both Linux and Windows. Here we will see how to use #ifdef and #define.

#ifdef _WIN32 /* If we are on windows, this flag is defined by compilers designed for windows computers */
#include <windows.h>
#include <stdio.h>
#include <tchar.h>

#define WIDTH 7
#endif /* End _WIN32*/

#ifdef linux /* If we are on linux flavour PCs, defined by GCC */
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

#endif /* End linux*/

int main(int argc, char *argv[])
{
#ifdef _WIN32
    MEMORYSTATUSEX statex;
    statex.dwLength = sizeof (statex);
    GlobalMemoryStatusEx (&statex);

    _tprintf (TEXT("There is  %*ld %% of memory in use.\n"), WIDTH, statex.dwMemoryLoad);
#endif /* End _WIN32 */

#ifdef linux
    char cmd[30];
    int flag = 0;
    FILE *fp;
    char line[130];
    int TotalMem, TotalFree, TotalUsed;

    /* Clear the buffer before building the command (memset, not
       memcpy from "", which would read past the empty literal) */
    memset(cmd, 0, sizeof cmd);
    sprintf(cmd, "free -t -m|grep Total");
    fp = popen(cmd, "r");
    while (fgets(line, sizeof line, fp))
    {
        flag++;
        sscanf(line, "%*s %d %d %d", &TotalMem, &TotalUsed, &TotalFree);
    }
    pclose(fp);
    if (flag) {
        printf("TotalMem: -- TotalUsed: -- TotalFree:\n");
        printf("%d\t\t %d \t\t %d\n", TotalMem, TotalUsed, TotalFree);
    }
    else
        printf("not found\n");
#endif /* End linux */

    return 0;
}