
08. Coding Your Service

Now that you've worked out your workflow, your delivery and storage technologies, and how you'll manage branching and versioning, you can build a service! You're likely to build three general types of service:

    1. REST-based Services - services that expose HTTP/REST endpoints, typically with a JSON workload, for use as app-to-app APIs or as a destination for a modern HTML5 user interface.
    2. Asynchronous Event-Based Services - services that react to events received asynchronously, generally over a queue or topic-based broker like Apache Kafka.
    3. Batch/Scheduling-Based Services - services that perform some action based on a pre-determined schedule.  These typically perform some type of bulk operation, like moving data to an external system, doing some form of daily data reconciliation, etc.

Coding REST-based Services  

This section defines the expected development practices for creating and supporting RESTful services.  Different technologies (Java, Python, Node.js, etc.) may have different implementations of the same concepts, but the concepts are critical.  In some cases, AT&T may have established standards for specific technologies (such as Java).  This page does not explain REpresentational State Transfer (REST) or how to code interfaces in specific technologies; rather, it explains the required implementation to satisfy machine-generated documentation, usability, security, enhancements, and support for the services.

Standards

AT&T has established several standards that you must follow when building and deploying microServices. 

The standards that apply to a service’s API include:

Documentation

Use the “machine-generated documentation” approach to generate documentation on how to use various services and the operations they expose.  We don’t rely on manually-maintained documentation because it’s difficult and expensive to maintain, and not as reliable as the code itself. 

Machine-generated documentation can easily determine a lot from the code. 
It can identify the:

      • method/operation names
      • return values
      • argument types and order
      • exceptions thrown
      • data structures used for arguments and return types
      • structure of the service itself.

However, machine-generated documentation cannot supply many usability details, like:

      • what happens when a specific value is specified in an argument?
      • what causes the various exceptions that might come from an operation?
      • what does a null for an argument mean?

We can’t get these types of details from structural analysis of the code.  Instead, many implementation technologies allow some form of metadata or markup (like Java annotations) to be inserted into the code.  In the case of JAX-RS, we can use annotations to supply information for code generation as well.  Different technologies may use a similar approach.

CDP uses the Swagger product to generate documentation from the source code.  Swagger relies both on determining the code structure and on annotations or metadata inserted into the code.  Swagger can generate documentation about the service, its operations, and the data types interchanged with them (both arguments and return values).  To assist with generating documentation, we can insert annotations into the methods AND the classes that represent the data types used as arguments and return values.  Different technologies have similar capabilities.

 

[Figure: JAX-RS service method annotated with Swagger annotations, including ApiImplicitParams]

Swagger can utilize the existing JAX-RS annotations in a Java-based service implementation.  It can augment these annotations with its own set to supply more information to the user. 
It also supports "indirect" arguments, like those you might find in a framework that packages the original arguments together into a containing object (such as Apache Camel). 
The example above shows the use of the "ApiImplicitParams" annotation to handle this case.  If using JAX-RS natively, use the ApiParam annotation instead.
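As a hedged illustration (not the code from the figure; the class, paths, and parameter names are hypothetical), a JAX-RS resource annotated for Swagger might look like this:

import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import io.swagger.annotations.ApiParam;
import io.swagger.annotations.ApiResponse;
import io.swagger.annotations.ApiResponses;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Hypothetical resource; names and paths are illustrative only.
@Api(value = "Customer operations")
@Path("/customers")
public class CustomerResource {

    @GET
    @Path("/{customerId}")
    @Produces(MediaType.APPLICATION_JSON)
    @ApiOperation(value = "Look up a customer by id",
                  notes = "Returns the customer record, or 404 if no such customer exists.")
    @ApiResponses({
        @ApiResponse(code = 200, message = "Customer found"),
        @ApiResponse(code = 404, message = "No customer with the given id")
    })
    public Response getCustomer(
            @ApiParam(value = "Unique customer identifier; must not be null", required = true)
            @PathParam("customerId") String customerId) {
        // Service logic omitted; the annotations above feed the generated documentation.
        return Response.ok().build();
    }
}

If the real arguments were wrapped in a containing object (the Apache Camel case mentioned above), the ApiImplicitParams annotation would take the place of ApiParam.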


Coding Asynchronous Event-Based Services

Domain-Driven Design achieves eventual consistency and loose coupling through separate, asynchronous events.  Produced and consumed by microServices (or other application components), these events convey meaning about outcomes or conditions that might trigger more processing in other services.  They break the synchronous coupling between the action’s initiator that generated the event and the event’s consumers downstream.  Isolated from the originator, the consumers operate on different timelines.

When using event-based processing, it’s important to isolate the event producers from the event consumers.  If the producers needed to know where to send events, any change, extension, or additional consumer in the interaction would require extensive code changes.  This would result in a many-to-many connectivity problem where all producers and consumers know each other, and it does not scale!

 

So, in order to gain isolation, producers and consumers use an intermediate “broker.”  All producers and consumers know this software component, but they don’t know each other.  Instead, a producer creates an event and presents it to the broker, then the broker distributes it to all interested consumers.  The broker manages the acceptance, persistence, and delivery of events to all services that want to receive them.

This type of interchange has a formal messaging model called “Publish-Subscribe” (also called pub/sub).  The broker that manages it is called a “Topic.”  A service registers its interest in receiving events by “subscribing” to a topic.  All events published to that topic are sent to its subscribers.  A publisher can publish events into multiple topics, and a subscriber can subscribe to multiple topics.

All events presented to a specific topic are delivered to all of its subscribers.  If subscribers need different event “streams” (different types of events to different subscribers), use multiple topics.

Say, for example, you want to send notifications of orders placed and customers billed.  Not all subscribers would want both types of events, so you would create separate topics, segregating events by “type.”  Any subscribers who want to know when someone was billed would subscribe to the billing topic, and those interested in when a shipment was sent would subscribe to the shipping topic.  Put simply, a publisher can publish into more than one topic, and a subscriber can subscribe to more than one topic; so you use different topics to segregate different types of events.
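Conceptually, the interaction looks like the following sketch.  It uses the plain Apache Kafka Java client only to illustrate topic-based publish/subscribe; in practice you would publish and subscribe through the DMaaP Message Router described below, and the topic name here is hypothetical:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PubSubSketch {

    // Producer side: publish an "order placed" event to the ORDER_EVENTS topic.
    static void publishOrderPlaced(Properties producerConfig, String orderJson) {
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerConfig)) {
            producer.send(new ProducerRecord<>("ORDER_EVENTS", orderJson));
        }
    }

    // Consumer side: a shipping service subscribes to the same topic and reacts on its own timeline.
    static void consumeOrderEvents(Properties consumerConfig) {
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerConfig)) {
            consumer.subscribe(Collections.singletonList("ORDER_EVENTS"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Process the event; the producer never learns who consumed it.
                    System.out.println("Received order event: " + record.value());
                }
            }
        }
    }
}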


Events

The events themselves are data objects that should contain enough information for a subscriber to process them.  For example, if an event indicates a created payment, it should also include info about the account, product(s), customer, amount, etc., so that the downstream systems have enough context to understand it.  We want to minimize the amount of interaction between microServices, so it’s counterproductive for consumers to contact the producer for information it could have sent in the event.
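As an illustrative example (the field names are hypothetical, not a mandated schema), a "payment created" event might carry:

// Hypothetical event payload: carries enough context that downstream
// consumers do not need to call back to the payment service.
public class PaymentCreatedEvent {
    public String eventId;        // unique id, useful for de-duplication
    public String accountNumber;  // account the payment applies to
    public String customerId;     // customer who made the payment
    public String productId;      // product or service being paid for
    public java.math.BigDecimal amount;
    public String currency;
    public java.time.Instant createdAt;
}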

Topics

Publishers should distribute events through topics, not queues (queues use point-to-point messaging and don’t provide for multiple consumers).  Topics allow additional consumers to subscribe and receive events, allowing enhancement and extension of the enterprise system without requiring code changes. 

For a new service to react to new payments, it just needs to subscribe to the topic.  This would not modify or impact the payment system in any way.

DMaaP

Producers and subscribers MUST use the DMaaP (Data Movement as a Platform) product to implement all topics and manage all event flows.  DMaaP, a sophisticated messaging system built on top of Kafka, provides data analysis, visibility, and control.  It supports “durable” subscribers, meaning events won’t get lost if multiple subscribers register with the topic and one isn’t “up” at the time.

DMaaP contains three significantly different capabilities.  The data router facilitates large data set movement, like file transfers, but isn’t useful in event processing.  The replication router replicates databases and data stores.  The DMaaP Message Router implements the publish/subscribe semantics needed for event processing. This feature should be used.

 

Refer to DMaaP for more information.

Coding Batch/Scheduling Based Services  

 

Batch processing involves bulk data processing (such as large amounts of inserts, updates, deletions, extractions, triggering time-initiated operations, etc.).  These activities differ from the interactive workload, mostly due to the scale of the data and operations involved.  A batch process may often generate as much (or more) access to the data store and processing logic as all of the day’s interactive traffic.  Batch processes, triggered mostly by either an event or a time-of-day alarm (like cron), usually have restricted time windows in which to operate.  Each domain’s batch processing requirements will vary widely.

Batch processes exist to perform several common functions:

    • extract data for processing elsewhere
    • update data to synchronize with an upstream or downstream system
    • perform data loads of new data
    • trigger additional processing based on time
    • perform archival and database cleanup

Batch processes often deal with diverse, heterogeneous data sets.  They can extract data from multiple different tables or processes, and direct imported/updated data to multiple destinations as needed.  For example, an extract from another system may need to process multiple different transaction types, such as updates, inserts, or others that might trigger additional processing.  And while batch processes share many similarities, each one likely has differences in its exact nature.

When you refactor a monolithic application to microServices, many of these functions will often move into different microServices.  If processed in the same way, a batch containing heterogeneous data would need to interact with multiple microServices, causing interaction between services that leads to high overhead and delays.  Instead, refactor and split up the batch so that any operations the microServices need to perform are isolated to each mS.  Using a “splitter” pre-process that extracts different processing types from the batch (and can create multiple homogeneous batch processes) would direct each stream to individual microServices for processing.

 

"Using a 'splitter' pre-process... would direct each stream to individual microServices for processing."
The following figure shows how to accomplish this:

Extracts can also accomplish the reverse of this. Multiple homogeneous extract streams can combine, creating a heterogeneous data stream for transmittal.
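As a hedged sketch of the splitter idea (the record types and handlers are hypothetical), a pre-process can group a heterogeneous batch by record type and hand each homogeneous stream to the microService that owns it:

import java.util.List;
import java.util.Map;
import java.util.function.Consumer;
import java.util.stream.Collectors;

// Hypothetical splitter: groups mixed batch records by type and dispatches
// each homogeneous sub-batch to the microService that owns that type.
public class BatchSplitter {

    public record BatchRecord(String type, String payload) {}

    private final Map<String, Consumer<List<BatchRecord>>> handlers;

    public BatchSplitter(Map<String, Consumer<List<BatchRecord>>> handlers) {
        this.handlers = handlers; // e.g. "PAYMENT" -> payment mS client, "ORDER" -> order mS client
    }

    public void split(List<BatchRecord> mixedBatch) {
        Map<String, List<BatchRecord>> byType = mixedBatch.stream()
                .collect(Collectors.groupingBy(BatchRecord::type));
        // Each homogeneous stream goes to exactly one owning microService.
        byType.forEach((type, records) ->
                handlers.getOrDefault(type,
                        r -> { throw new IllegalArgumentException("No handler for " + type); })
                        .accept(records));
    }
}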

Time-initiated processing can happen using batch as well.  For example, someone could schedule a batch “job” at some time of day, and initiate some operation on a microService to process pending data. 
The processing should move into the microService, not stay maintained in the “job.”  The job only initiates the process.

Patterns

Batch processing has two different patterns: 8A and 8B.  Both strive to move all of the batch workload processing into the microService.  Additionally, the microService should employ batch algorithms written to process large batches of data, optimized to operate on vectors (sets of data) and to access the data store in a batch mode.  Using interactive, scalar-based approaches to batch won’t deliver the high throughput required and will consume too many resources, generating large overhead.

MicroServices that interact with additional microServices during batch processing should be optimized to work with vector data (lists and sets) rather than scalar (single value) requests.  This also applies to other record systems that might perform batch processing.  In many cases, existing batch processing simply “hits the database” directly. This violates key mS principles when interacting between microServices and must not be allowed.  Batch processing within the microService that owns the data can access its own database directly, but not the databases of other microServices or applications.
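As a minimal sketch of vector-oriented, "batch mode" access to the data store the microService owns (assuming a JDBC data store; the table and columns are hypothetical):

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class PaymentBatchWriter {

    // Hypothetical row type; the table and columns are illustrative only.
    public record PaymentRow(String accountNumber, BigDecimal amount) {}

    // Vector-oriented write: one prepared statement and one batched execution
    // instead of a scalar insert (and a round trip) per record.
    public void insertPayments(Connection connection, List<PaymentRow> payments) throws SQLException {
        String sql = "INSERT INTO payment (account_number, amount) VALUES (?, ?)";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            for (PaymentRow row : payments) {
                statement.setString(1, row.accountNumber());
                statement.setBigDecimal(2, row.amount());
                statement.addBatch();
            }
            statement.executeBatch(); // the driver can send the whole vector at once
        }
    }
}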

Pattern 8A

In this pattern, a single microService performs all of the processing of a single batch process.  The batch initiation actually delegates the batch processing to the microService.  If, in the past, a cron “job” executed a batch “job,” that “job” would now delegate processing to the microService.  All of the batch job’s original processing logic is now part of the microService.

This pattern ingests the entire batch process into a single microService instance and requires some of the refactoring mentioned earlier.  Also, separate microService instances may be reserved and designated only for batch processing, eliminating the impact that interactive traffic would otherwise suffer.

Pattern 8B

Pattern 8B, a more sophisticated pattern, allows for a "batch controller" to integrate into a microService.  This batch control function could implement scheduling, splitting, combining, and triggering functions.  It could also provide services like chunking where the batch could break up into blocks and distribute across multiple replicas.  This pattern requires more effort in design and development, but could be used for very complex batch processing requirements.

Authentication Integration - Camunda and AJSC

Authentication for the AJSC framework is performed using the CADI filter (a servlet filter deployed into the web server container).  The CADI filter can be used as a standalone component if you wish to perform only authentication.  It is also integrated within the AAF filter (also a servlet filter) as a prerequisite to the authorization process.  This section describes how authentication is performed for your service using both the CADI and AAF filters.

      • CADI Filter
      • AAF Filter
      • Testing

CADI Filter

 To integrate the CADI filter in your AJSC app, follow the steps below.

**Step 1:**

You need to add the following three dependencies to your project pom.xml.

**pom.xml**

Please check for the latest non-SNAPSHOT version available of these three components at http://mavencentral.it.att.com:8084/nexus/.

 

 

<dependency>
    <groupId>com.att.cadi</groupId>
    <artifactId>cadi-aaf</artifactId>
    <version>1.x.x.x</version>
</dependency>
<dependency>
    <groupId>com.att.cadi</groupId>
    <artifactId>cadi-client</artifactId>
    <version>1.x.x.x</version>
</dependency>
<dependency>
    <groupId>com.att.cadi</groupId>
    <artifactId>cadi-core</artifactId>
   <version>1.x.x.x</version>
</dependency>

 

 

**Step 2:**

You need to place the cadi.properties file inside the project directory, where it can be loaded during execution (such as a resource file). A sample cadi.properties file is provided below.

cadi.properties
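The collapsed sample is not reproduced here; as a minimal sketch, assuming only the property names discussed in this section (all values are placeholders, and the generated sample may contain more properties):

# Sketch of cadi.properties - adjust to the aaf environment you want to target
cadi_loglevel=INFO                            # log-level property name assumed
aaf_url=https://<aaf-host>:<aaf-port>
cadi_keyfile=<path-to>/keyfile
cadi_keystore=<path-to>/truststore2016.jks
cadi_keystore_password=<keystore-password>
aaf_id=<mechid>
aaf_password=enc:<encrypted-mechid-password>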

Most of the properties above can be configured to your needs, such as the log level and aaf_url, depending on the aaf environment you want to target. There are five key properties inside cadi.properties:

      1. cadi_keyfile
      2. cadi_keystore
      3. cadi_keystore_password
      4. aaf_id
      5. aaf_password

cadi_keystore can be retrieved using the following wget command, and should go inside the project directory, just like the cadi.properties file.

 

 

wget http://aaf.dev.att.com:8096/truststore2016.jks

 

The aaf_id and aaf_password are the mechid and its encrypted password, used to pull down the user's permissions in the namespaces of which the mechid is an admin. The cadi_keyfile is the key used to encrypt the password.

To generate that key and to encrypt the password, you will need the cadi-core jar. This should already be available in your local maven repository under the com/att/cadi directory. Follow the commands below.

To generate the key called 'keyfile':

 

 

java -jar <path>/cadi-core-1.x.x.x.jar keygen keyfile

 

To encrypt the password using that key:

 

 

java -jar cadi-core-1.x.x.x.jar digest <MechId Password> <Path of the keyfile>/keyfile

 

Once encrypted, put the encrypted password, with the phrase "enc:" preceding it, as the value for the aaf_password property. For example, if the encrypted password is "abcdef", the value for the aaf_password property would be "enc:abcdef". Place the keyfile inside the project directory, and specify its location as the value for the cadi_keyfile property.
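For example, the resulting entries in cadi.properties would look like the following (using the hypothetical encrypted value "abcdef" from above; the path is a placeholder):

cadi_keyfile=<project-directory>/keyfile
aaf_password=enc:abcdef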

More information regarding all available configurable properties inside cadi.properties can be found at CADI Configuration Options.

Step 3:

After setting up the configuration for CADI filter, you need to register the CADI filter within your application. To do so, place the following three methods in any class with a @Configuration annotation.  The @Configuration annotation defines the class as a Spring configuration class, and the methods are used to define instantiation, configuration, and initialization logic for objects that the Spring IoC container will manage.

Note that you need to identify the location of the cadi.properties file inside the propAccess() method shown below.

For more information regarding CADI filter, please visit its official documentation page.

 

 

@Bean
public PropAccess propAccess() {
    // Point CADI at the cadi.properties file; adjust the path to your project layout.
    PropAccess access = new PropAccess(new String[]{"cadi_prop_files=<PATH>/cadi.properties"});
    //PropAccess access = new PropAccess(new String[]{"cadi_prop_files=etc/config/cadi.properties"});
    return access;
}

@Bean(name = "cadiFilter")
public Filter cadiFilter() throws ServletException {
    // Create the CADI servlet filter using the property access defined above.
    return new CadiFilter(true, propAccess());
}

@Bean
public FilterRegistrationBean cadiFilterRegistration() throws ServletException {
    // Register the CADI filter against all URL patterns, ahead of other filters.
    FilterRegistrationBean registration = new FilterRegistrationBean();
    registration.setFilter(cadiFilter());
    registration.addUrlPatterns("/*");
    registration.setName("cadiFilter");
    registration.setOrder(0);
    return registration;
}

 

 

AAF Filter

As mentioned before, the AAF Filter integrates the CADI Filter to perform authentication before doing authorization. Therefore, using the AAF filter to do authentication is the same as using the CADI filter. However, you will need to configure authorization rules when using the AAF Filter, but not when using the CADI Filter alone. Look for more information about the AAF Filter on the AJSC wiki page.

To set up the AAF Filter for authentication, follow the steps below.

Step 1: Include the AAF Filter as part of the project, either through ECO at project creation time, or by manually including the aaf-filter dependency in the project pom.xml.

 

Check the latest version on mavencentral.

pom.xml

 

<dependency>
     <groupId>com.att.api.framework</groupId>
     <artifactId>ajsc-aaf-filter</artifactId>
     <version>200.x.x</version>
</dependency>

 

**Step 2:**

Follow step 2 of CADI Filter setup from above.

**Step 3:**

You will need to tell the AAF filter where to find cadi.properties through the JVM argument named cadi.properties.location. For example, if you have placed the cadi.properties file in the directory src/main/resources, you would pass the JVM argument as follows:

-Dcadi.properties.location=src/main/resources/cadi.properties

**Step 4:** Set up Authorization rules in a property file. Instructions on setting up authorization rules for the AAF filter can be found on the Authorization Integration documentation page.

 

Testing

Now the service is all set to use CADI filter/AAF Filter for authentication.

    • To test through a REST client, pass in your aaf credentials ([email protected] and global login password) as a Basic Auth header. If authentication succeeds, you can access the requested resource. If not, the transaction will stop and you get a 401 error.
    • To test through a browser, if you're set up to use CSP Global Logon and your host domain is AT&T-specific (att.com, sbc.com, etc.), it will redirect you to global logon if you haven't already logged in. If you have, it will authenticate you using the CSP cookie stored inside your browser.


Still testing through the browser, if you access through a non-AT&T-specific host (such as localhost), a pop-up screen will appear where you can enter your aaf credentials.

 

Authentication Integration - ANSC

In ANSC, AAF is enabled by default.  All of the required packages get generated along with the service.  All the required code base for AAF is bundled as npm packages.

      • @com.att.ajsc/enforcerjs
      • @com.att.ajsc/aaf
      • @com.att.ajsc/gatekeeperhapijs

The enforcerjs package includes the gatekeeperjs and aaf packages that do the actual authentication/authorization.


In ANSC, AAF is enabled by default on two of the sample routes (helloworld.js and login.js) that get generated along with the service.

 

When you scaffold your project for ANSC, look for the two route files that use Basic Auth and CSP.
By default, these routes come enabled to use CSP or Basic Auth.

In routes/helloworld.js

This route uses the enforce method from the enforcerjs package, along with onPreAuth, one of the hapi.js lifecycle hooks.  It is called regardless of whether authentication is performed.

In routes/login.js

This route uses the aafRunner function in the POST route for login, required from the npm package @com.att.ajsc/aaf.

Disable CSP and use Basic Auth

To disable CSP, just comment out config.ext.onPreAuth, which uses the enforce function.

Commenting this out will no longer make use of the enforce method, which checks for the CSP cookies that are set when you use global logon, and checks whether you're using a registered domain name (for example, att.com).

How to Use Basic Auth With JWT Token

In ANSC, the JWT token strategy applies for basic authentication. 

To use it, change 'false' to 'token' in the config.auth strings. This will tell hapi to use the token strategy.

In order to use this, you will either:

      • Hit the /login route from the browser
      • POST to it with the authorization header set in Postman to Basic Auth, passing in your username and global logon password.

If authenticated, you'll get a token in the response header, which you can then pass as the value of the authorization header on subsequent requests.

This will only apply to routes that have 'token' set in config.auth. 
If config.auth is set to false for a given route, then that route will have no authentication check.

If you set 'token' for a given route and pass the token obtained at login as the authorization header value, the token strategy will verify it. 
If it is valid, you can access the route.

Authorization Integration - AJSC

The AAF servlet filter performs authentication for the AJSC framework using the CADI filter.  It accesses the user’s permissions in the namespaces for the mechid, specified in the cadi.properties.  This mechid must be a namespace administrator to perform authorization. Follow the steps below to set up authorization rules for your AJSC service using the AAF Filter.

Complete documentation on the AAF filter can be found on the AJSC site, at AAF Filter

Step 1: Perform Step 1 and Step 2 under Authentication Integration for AJSC.

Step 2: 

You will need to define a mapping of URLs and permissions for AAF to check in the file named AAFUserRoles.properties.

You can place the file anywhere within your project structure, but you have to define its location as a JVM argument with the name, “aaf.roles.properties.”

For example, if you placed the “AAFUserRoles.properties” file in the etc/aaf directory, you would add the following argument to the list of JVM arguments:

 

-Daaf.roles.properties=etc/aaf/AAFUserRoles.properties

 

 


AAFUserRoles.properties

The file's contents will be one or more lines in the format described below:

${URL_PATTERN} (${METHODS})=${PERMISSION_1},${PERMISSION_2},...,${PERMISSION_N}

Each PERMISSION is in the format described below:

${TYPE}|${INSTANCE}|${ACTION}

Type, instance, and action components make up an AAF permission instance. See the examples below for a demonstration.

Examples

 

/** (ALL)=com.att.ajsc.access|*|read

 

The above example means the following:

      • /** for the URL means that AAF must do the following authorization check(s) for all requests.
      • Method ALL means AAF must check permissions on the right side of the equation for ALL HTTP Requests (GET, POST, PUT, DELETE) that match the first bullet point’s URL pattern.
        • The user must have com.att.ajsc.access|*|read permission

 

/** (GET,PUT)=com.att.ajsc.access|*|read

 

The above example means the following:

      • /** for the URL means that AAF must do the following authorization check(s) for all requests.
      • Method GET,PUT means AAF must check permissions on the right side of the equation for PUT and GET HTTP requests that match the first bullet point’s URL pattern.
        • In other words, if a POST or DELETE request comes in for any URL, then it does NOT need AAF authorization.
      • The user must have com.att.ajsc.access|*|read permission

 

/** (POST)=com.att.ajsc.access|*|read,com.att.ajsc.mariadb.access|*|read

 

The above example means the following:

      • /** matches any URL, and results in the checking of all requests
      • Method POST means AAF must check the permissions on the equation’s right side for POST HTTP requests that match the URL pattern from the first bullet point
      • The user must possess com.att.ajsc.access|*|read and com.att.ajsc.mariadb.access|*|read
        • In other words, if the user possesses only one of those permissions, the request will be denied.

 

 

/** (ALL)=com.att.ajsc.access|*|read
/service/** (PUT)=com.att.ajsc.mariadb.access|*|read

 

The above example means the following:

      • /** (ALL) as above, the URL wildcard matches all URLs and ALL matches any HTTP method, resulting in authorization check(s) for all requests and all methods
        • The user must have com.att.ajsc.access|*|read permission
      • /service/** (PUT) for the URL means that all requests whose URL begins with /service/ and whose HTTP method is PUT must pass the following authorization check(s).
        • The user must have com.att.ajsc.mariadb.access|*|read permission

 

 

/service/hello (ALL)=com.att.ajsc.mariadb.access|*|read

 

The above example means the following:

      • /service/hello (ALL) for the URL means that AAF must do the following authorization check(s) for all requests whose URL is exactly /service/hello, and for ALL request methods that match that URL pattern.
        • The user must have com.att.ajsc.mariadb.access|*|read permission.

Authorization Integration - ANSC

Follow the steps below to set up authorization rules for your ANSC service.  Look for more information about AAF on the wiki page.

 

Step 1:   Perform Step 1 and Step 2 under Authentication Integration for ANSC.

**Step 2:** You will need to define a mapping of URLs and permissions for AAF to check in the file named user.permission.js, located in your project's config folder.

Setting up permissions on AAF

If you want authorization rules set up for an app, you’ll need to create roles and permissions on AAF.  Follow the steps below.

      1. On AAF, create one or more roles to associate different permissions and users. (Hint: use “role create” and “role user add” commands.)
      2. Create AAF permissions for your application's authorization rules as follows (Hint: use the “perm create” command):
        1. “Type” of permission is the value you will add to the config/user.permissions.js file. All permissions associated with one app must share the same "type" value.

        2. “Instance” of permission is your application's URL/API pattern.

        3. “Action” of permission means the HTTP method of the request you’re allowing access for (GET, POST, PUT, etc.).
      3. Grant the permissions you created to one or more roles. (Hint: you can do this after step 2 with “perm grant,” or as part of step 2 by specifying roles to grant permissions to with “perm create.”)
      4. Associate the AAF permission types with authorization endpoints by defining the “aafapp” properties inside nginx.properties.

You can perform steps 1-4 using CLI commands at the AAF dashboard command prompt or via the AAF API.  For more info on CLI commands, see this page.  For more info on the AAF API, click on the “AAF API” tab on the AAF dashboard home page.

 

 

To do the following steps, you must be a namespace admin.

 

Read the next section for an example of how to set permission rules on an example application called "entrycardemo."

Example

To create permissions and roles for the "entrycardemo," assume a namespace of "com.att.ajsc." 
The example will create four different permissions and associate them with two different roles.

 

The AAF Prod UI Url is https://aaf.it.att.com:8095.

 

Assume the example application has five API endpoints:

      • /rest/<servicename>/v1/hello
      • /rest/<servicename>/v1/user/:id
      • /rest/<servicename>/v1/goodbye
      • /rest/<servicename>/v1/goodbye/customMessage
      • /rest/<servicename>/v1/welcomeBack

And two users – John Doe (jd6666) and Mary Jane (mj7777) – need permissions to access those APIs.

      • John Doe will also receive admin permissions and the ability to access all resources using all HTTP methods
      • Mary Jane will get limited permissions, allowing her to access resources for the:
        • "Hello" endpoint, using GET or PUT methods
        • Two "goodbye" endpoints, using POST
        • Root ("/") endpoint, using all methods.

To set them up, follow steps 1-4 listed above:

Step 1:

Create two roles, one to assign to John Doe and another to Mary Jane. Create and assign the role "com.att.ajsc.entrycardemo.admin" to John Doe. Create and assign the role "com.att.ajsc.entrycardemo.user" to Mary Jane. Use the following commands:

 

 

role create com.att.ajsc.ansc-admin
role user add com.att.ajsc.ansc-admin jd6666@csp.att.com
 
role create com.att.ajsc.ansc
role user add com.att.ajsc.ansc mj7777@csp.att.com

 

Step 2:

Create permissions for John Doe and Mary Jane.

As mentioned, the "type" component of all of an application's permissions must be the same. Therefore, define the permissions' type as "com.att.ajsc.user."

John Doe can access every resource with every method. You can achieve that with a single permission rule, as follows:

      • The permission's "instance" component (the URL pattern) must be a pattern that matches every incoming request URL; in other words, a wildcard match such as "/*".
      • The permission's "action" component should allow all HTTP methods. You can specify these in comma-separated list form (GET,PUT,POST,DELETE), or you can simply use the keyword "ALL," denoting access for all methods.

The command used to create that permission looks like this:

 

 

perm create com.att.ajsc.user-admin /rest/<servicename>/* ALL

 

Mary Jane can access resources for the 'hello' endpoint with GET or PUT method, and the two "goodbye" endpoints with POST method. For this, use two permission rules. Again, use the same 'type' component for these permissions as before, as they're still associated with the application.

 

 

 

perm create com.att.ajsc.user /rest/v1/hello GET,PUT
perm create com.att.ajsc.user /rest/v1/goodbye/* POST
perm create com.att.ajsc.user-admin /rest/v1/* ALL

 

      • The first command allows access to the /rest/v1/hello endpoint with the GET or PUT method.
      • The second command allows access to endpoints that begin with /rest/v1/goodbye (note the "*" in the command) with the POST method; it allows the user to access both /rest/v1/goodbye and /rest/v1/goodbye/customMessage.
      • The third command allows access to any endpoint under /rest/v1/ with all methods.


Step 3:

Next, grant the permissions created to the respective roles from Step 1.

Grant the first permission (com.att.ajsc.user|*|ALL) to com.att.ajsc.user-admin role.

 

 

perm grant com.att.ajsc.ansc-admin /rest/v1/* ALL com.att.ajsc.user-admin

 

 

Authorization Integration - Camunda

Prerequisites: 

Steps to enable Camunda Authorization:

  1. Enable Camunda aaf permissions by adding the camunda-aaf filter dependency to your service/pom.xml file

 

 

pom.xml

<!-- To include AAF-Camunda Filter in your application after initialization, simply include the following dependency in your pom.xml. -->
<dependency>     
    <groupId>com.att.api.framework</groupId>     
    <artifactId>ajsc-camunda-aaf-filter</artifactId>     
    <version>200.x.x</version> 
</dependency>

 

  2. Add the ajsc-aaf-camunda.yaml configuration declaration in application.properties

 

opt/att/ajsc/config/application.properties

#configure ajsc-aaf-camunda.yaml
camunda.aaf.permission.yaml=etc/ajsc-aaf-camunda.yaml

 

 

  3. Provide your AAF Camunda configuration in ajsc-aaf-camunda.yaml in the format described below.

 

 

ajsc-aaf-camunda.yaml

# Group Definitions - define Camunda Groups with their Authorizations
groupDefinitions:
  # Example: define sales group
  #   Provide Task List user authorizations:
  #   https://docs.camunda.org/manual/latest/webapps/admin/authorization-management/#grant-permission-to-start-processes-from-tasklist
  - camGroup: {id: sales, name: 'example group for human task demo'}
    camAuthorizations:
      - {resourceName: Application, resourceId: tasklist, permissions: ACCESS}
      - {resourceName: ProcessDefinition, permissions: 'READ,CREATE_INSTANCE'}
      - {resourceName: ProcessInstance, permissions: CREATE}
 
# Group Associations - mapping AAF Permissions to Camunda Groups
groupAssociation:
  # Example: assign users to the Camunda system-defined camunda-admin group when users have the AAF Permission.
  - aafPermission: {type: com.att.ajsc.camunda.admin}
    camGroup: {id: camunda-admin}
  # Example: assign users to a sales group when users have the AAF Permission.
  - aafPermission: {type: com.att.ajsc.camunda.sales}
    camGroup: {id: sales, name: 'example group for human task demo'}
 
# Global Authorizations - declare Camunda Authorizations for all users
authorizationGlobal:
  # Example: give all users access to the Camunda Admin app so they can check their own access
  camAuthorizations:
      - {resourceName: Application, resourceId: admin, permissions: ACCESS}
      - {resourceName: Application, resourceId: welcome, permissions: ACCESS}

AAF and Camunda Classes

Camunda enumerations:

 

Testing

Local Testing 

      1. Create a secret folder in your project workspace, i.e. ${project_workspace}/opt/att/ajsc/secret
      2. Copy your keyfile and truststore2016.jks files to the /opt/att/ajsc/secret folder
      3. Alter the following properties inside etc/cadi.properties to specify the keyfile and trust store:
        1. cadi_keyfile=/opt/att/ajsc/secret/keyfile
        2. cadi_keystore=/opt/att/ajsc/secret/truststore2016.jks
      4. Commit the files back to codecloud
      5. Build and run the application

Kubernetes Testing

In order to test AAF on Kubernetes (K8S), you will need to upload the keyfile and truststore files as secrets on kubernetes. Use the following process to upload these files:

 

  1. Clone the _playbook repository created alongside your service repository at eco seed generation.
  2. Add the truststore and keyfiles inside the secrets/ folder in the playbook.
  3. Commit the files back to codecloud.
  4. After you've changed and committed the secret files to codecloud, you can build the app using Jenkins pipeline and test on kubernetes.

Note:

Modify the Kubernetes ConfigMap property to specify the location and name of the yaml file, such as

 

camunda.aaf.permission.yaml=/opt/att/ajsc/config/ajsc-aaf-camunda.yaml 

 

Example Yaml File

ajsc-aaf-camunda.yaml

Implementing Monitoring Hooks

Prometheus Integration

Monitoring gives us the ability to observe our system's state and address issues before they impact our business. Monitoring can also help to optimize our users' experience.

To analyze data, first you need to extract metrics from your system, like the memory usage of a particular application instance.

Every service is different, and you can monitor many aspects of them. Metrics can range from low-level resources like memory usage to high-level business metrics like the number of signups. 

Prometheus is, at its core, a time-series database. It can store four types of metrics (numbers):

      • Counter - a metric that can only count up
      • Gauge - a metric that can be set to any numeric value
      • Histogram - a series of numbers that can be represented as a histogram in configurable buckets
      • Summary - a series of numbers that can be represented as a collective unit

A set of labels ties each metric together.  Each different combination of labels produces a separate time series.
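The ANSC integration below uses the Node prom-client library; purely to make counters and labels concrete, here is a sketch using the Prometheus Java simpleclient (an illustration only, not part of the generated service):

import io.prometheus.client.Counter;

public class MetricsSketch {

    // A Counter can only go up; the "route" label produces a separate time series per route value.
    static final Counter ROUTE_SERVED = Counter.build()
            .name("route_served_count")
            .help("Number of times a route has been served.")
            .labelNames("route")
            .register();

    static void recordHit(String route) {
        ROUTE_SERVED.labels(route).inc();
    }
}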

Prometheus uses a pull mechanism, meaning data doesn't get pushed to Prometheus; Prometheus scrapes metrics from targets defined in its configuration file.

You can visualize Prometheus metrics on its expression browser, using its own query language, PromQL.  However, Grafana can generate visualizations with better graphics, and more collectively analyze different time series.

Integration with ANSC

This section includes steps for monitoring the ANSC service via Prometheus.

To collect metrics from our ANSC application and expose them to Prometheus, we used an NPM library called prom-client.

In the following example, we create a Counter metric to get the number of times a route has been accessed.

If the user wants to export any service-level metrics from a generated ANSC project, they can follow the 'example_route_served_count'  or 'helloworld_route_served_count' metrics, which they can find in the generated ANSC project as a reference.

The code that adds the 'example_route_served_count' metric to the '/metrics' route can be found in src\server\routes\example.js.

The code that adds the 'helloworld_route_served_count' metric to the '/metrics' route can be found in src\server\routes\helloworld.js.

You can now see the sample response by accessing the /metrics endpoint.

Prometheus recommends some default metrics that you can collect by calling collectDefaultMetrics().

You can get all metrics by running register.metrics(), which will output a string for Prometheus to consume.

**Kubernetes integration**

Prometheus offers built-in Kubernetes integration capable of discovering Kubernetes resources like Nodes, Services, and Pods, while scraping metrics from them.

The Prometheus server needs to know about the endpoint and the server port where metrics are exported.

Generated ANSC service includes the following configuration in the deployment file.
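The generated block itself is not reproduced here; a conventional sketch using the standard prometheus.io scrape annotations (assumed here, and possibly different from what the generator emits) looks like:

# Hypothetical sketch of the scrape settings in deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <mS>
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "false"    # set to "true" so Prometheus discovers and scrapes the pod
        prometheus.io/path: "/metrics"   # endpoint where prom-client exposes the metrics
        prometheus.io/port: "<service-port>"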


In ANSC, Prometheus is set to false by default. Change it to true by editing the deployment.yaml on Kubernetes.


Verify exported metric labels on Prometheus dashboard in the cluster

A sample service is deployed on the K8s cluster https://zlp22248.vci.att.com/

The Prometheus endpoint for this cluster is: http://zlp22248.vci.att.com/prometheus

Verify your exported metric labels are available in the filter

Grafana

Create an application-specific Grafana dashboard to see application-exported metric graphs only. To create a Grafana dashboard, follow https://prometheus.io/docs/visualization/grafana/.


We can see all other cluster-related metric graphs at the Kubernetes dashboard.


Logging Standard Events

 

Making Configuration Portable and Localizable - AJSC and Camunda

ConfigMaps manage dynamic configuration files and secret files, letting you modify and reload them without rebuilding the microService Docker image. 
They also help to bind environment-specific properties into configuration files.

Prerequisite

• MicroService project is already generated using ECO Seed template
• Complete all ECO System Connections and Pipeline Configurations

Generated microService Project Structure

Every microService generated using the ECO Seed template will generate three CodeCloud repositories under the selected CodeCloud Git repo.
(If the mS opted to use a different playbook, then the playbook repo will not be generated. The user will have to provide the playbook Git repo URL in the pipeline.)

  1. mS Project repo
  2. mS_configrole repo
  3. mS_playbook repo


 

 

 

 

_configrole Structure

The _configrole repo consists of deployment files (used by Ansible API for deployment process) and ConfigMap files (template files that can bind with playbook inventory values based on the environment). See the project structure to the right.

  1. templates/configs : Place the dynamic configuration files under this directory. Files under this directory are created as Kubernetes ConfigMaps (named <mS>-configs, e.g. AjscBundleconfigmapB-configs) and mounted as an 'opt/att/ajsc/etc' volume in Kubernetes during deployment, so the microService can access these files through this volume mount (see the sketch after this list).
  2. templates/k8s : Kubernetes deployment and service yaml files go in this directory. These yaml files use the dynamic properties from the JenkinsFile and playbook inventory.
  3. tasks/main.yaml : Used to delete and create ConfigMaps and secrets, this yaml file also deploys the microService to each host defined in the playbook inventory.
  4. defaults/main.yml : Define default values for the dynamic variables here. The values defined in the playbook inventory (if defined) will override these.
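A minimal sketch of how the templates/k8s deployment yaml can mount the <mS>-configs ConfigMap at the path described in item 1 (field names follow standard Kubernetes conventions; the generated templates may differ):

# Hypothetical fragment of a templates/k8s deployment yaml
spec:
  template:
    spec:
      containers:
        - name: <mS>
          volumeMounts:
            - name: configs
              mountPath: /opt/att/ajsc/etc   # where the microService reads its dynamic config
      volumes:
        - name: configs
          configMap:
            name: <mS>-configs               # the ConfigMap created from templates/configs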

 

 

Note

Based on its requirements, the corresponding service can modify or customize these files accordingly.

 

 


_playbook Structure

The _playbook repo consists of inventory files based on environment and secrets. You can see the project structure in the image on the right.

  1. secrets : Place all security files, like .jks files or the keyfile, under this directory. Files under this directory are created as Kubernetes secrets (with the name <mS>, e.g. AjscBundleConfigmapB) and mounted as an 'opt/att/ajsc/security' volume in Kubernetes during deployment, so the microService can access these files through this volume mount.
  2. inventory : This directory holds the dynamic binding properties based on the environment and Kubernetes cluster/hosts. Each environment (dev, test, prod) has a directory containing (a sketch of this layout follows the list):
    1. hosts : In this file, we can specify the list of hosts where deployment has to happen.
    2. group_vars/all : This file holds the dynamic properties common to all hosts under the corresponding environment.
    3. host_vars : This directory contains one file per host defined in the 'hosts' file, which contains the dynamic properties for that specific host.
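A sketch of the inventory layout just described (environment and host names are examples):

inventory/
  dev/
    hosts              # list of hosts to deploy to in this environment
    group_vars/
      all              # dynamic properties common to every dev host
    host_vars/
      <host-1>         # dynamic properties specific to that host
  test/
    ...
  prod/
    ...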


Deployment & Verification

  1. Configure the deployment pipeline in Eco, with PHASE as BUILD_DEPLOY or DEPLOY if you have already built your docker image.
    1. You can override Kube.namespace by providing ANS_NAMESPACE.
  2. Start a pipeline instance, check the logs on Jenkins server or wait for notification email.  
  3. Once the deployment succeeds, you can verify the ConfigMaps, secrets, deployment, pod and service are created in the corresponding Kubernetes cluster.


References

GRM Edge Wiki :  https://wiki.web.att.com/display/cdp/GRMEdge

Ansible API / Configuration Management Process : Configuration Management Process

 

Making Configuration Portable and Localizable - ANSC

ConfigMaps manage dynamic configuration files and secret files, letting you modify and reload them without rebuilding the microService Docker image.  They also help to bind environment-specific properties into configuration files.

Prerequisite

• MicroService project is already generated using ECO Seed template
• Complete all ECO System Connections and Pipeline Configurations

Generated microService Project Structure

Every microService generated using the ECO Seed template will generate three CodeCloud repositories under the selected CodeCloud Git repo. (If the mS opted to use a different playbook, then the playbook repo will not be generated. The user will have to provide the playbook Git repo URL in the pipeline.)

  1. mS Project repo
  2. mS_configrole repo
  3. mS_playbook repo 


_configrole Structure

The _configrole repo consists of deployment files (used by Ansible API for deployment process) and ConfigMap files (template files that can bind with playbook inventory values based on the environment). 

  1. templates/configs : Place the dynamic configuration files under this directory. Files under this directory are created as Kubernetes ConfigMaps (named <mS>-configs, e.g. anscseedbundle-configs) and mounted as a '/var/www/config-map' volume in Kubernetes during deployment, so the microService can access these files through this volume mount.
  2. templates/k8s : Kubernetes deployment and service yaml files go in this directory. These yaml files use the dynamic properties from the JenkinsFile and playbook inventory.
  3. tasks/main.yaml : Used to delete and create ConfigMaps and secrets, this yaml file also deploys the microService to each host defined in the playbook inventory.
  4. defaults/main.yml : Define default values for the dynamic variables here. The values defined in the playbook inventory (if defined) will override these.


 

_playbook Structure

The _playbook repo consists of inventory files based on environment and secrets.

  1. secrets : Place all security files, like .jks files or the keyfile, under this directory. Files under this directory are created as Kubernetes secrets (with the name <mS>, e.g. anscseedbundle) and mounted as a '/var/www/secrets' volume in Kubernetes during deployment, so the microService can access these files through this volume mount.
  2. inventory : This directory holds the dynamic binding properties based on the environment and Kubernetes cluster/hosts. Each environment (dev, test, prod) has a directory containing:
    1. hosts : In this file, we can specify the list of hosts where deployment has to happen.
    2. group_vars/all : This file holds the dynamic properties common to all hosts under the corresponding environment.
    3. host_vars : This directory contains one file per host defined in the 'hosts' file, which contains the dynamic properties for that specific host.


 

Deployment & Verification

  1. Configure the deployment pipeline in Eco, with PHASE as BUILD_DEPLOY or DEPLOY if you have already built your docker image.
    1. You can override Kube.namespace by providing ANS_NAMESPACE.
  2. Start a pipeline instance, check the logs on Jenkins server or wait for notification email. 
  3. Once the deployment succeeds, you can verify the ConfigMaps, secrets, deployment, pod and service are created in the corresponding Kubernetes cluster


 

References

GRM Edge Wiki :  https://wiki.web.att.com/display/cdp/GRMEdge          

Ansible API / Configuration Management Process : Configuration Management Process

Bad Practices 

