Book Reviews
Book 1: Pro Continuous Delivery: With Jenkins 2.0
What You'll Learn
Create a highly available, active/passive Jenkins server, either with CoreOS and Docker or with Pacemaker and Corosync
Use a Jenkins multi-branch pipeline to automatically perform continuous integration whenever there is a new branch in your source control system. NOTE: when you create a new branch on the GitHub repo, the Jenkins multi-branch pipeline automatically detects it and creates a pipeline for it based on that branch's Jenkinsfile (see the Jenkinsfile sketch after this list).
Describe your continuous delivery pipeline with Jenkinsfile
Host Jenkins server on a cloud solution
Run Jenkins inside a container using Docker
Discover how the distributed nature of Git and the “merge before build” feature of Jenkins can be used to implement gated check-in
Implement a scalable build farm using Docker and Kubernetes
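The multi-branch note above is easiest to see with a concrete Jenkinsfile. Below is a minimal sketch (mine, not the book's) of the kind of declarative Jenkinsfile a multi-branch pipeline job picks up from each branch; the `maven-3` and `jdk-8` tool names are assumptions about what is configured in Jenkins, and the Maven goals are illustrative.

```groovy
// Minimal declarative Jenkinsfile committed at the root of each branch.
// A multi-branch pipeline job scans the repository, finds this file on
// every branch, and automatically creates and runs a pipeline for it.
pipeline {
    agent any                  // run on any available agent
    tools {
        maven 'maven-3'        // assumed names from Global Tool Configuration
        jdk   'jdk-8'
    }
    stages {
        stage('Build & Test') {
            steps {
                sh 'mvn -B clean verify'   // compile and run the test suite
            }
        }
        stage('Archive') {
            steps {
                archiveArtifacts artifacts: 'target/*.jar', fingerprint: true
            }
        }
    }
}
```

When a new branch containing this file appears, Jenkins indexes it and the branch gets its own pipeline run with no extra job configuration.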
Who This Book Is For
You have experience implementing continuous integration and continuous delivery using Jenkins freestyle Jobs and wish to use the new Pipeline as a Code feature introduced in Jenkins 2.0
Your source code is on a Git-like version control system (Git, GitHub, GitLab, etc.) and you wish to leverage the advantages of a multi-branch pipeline in Jenkins
Your infrastructure is on a Unix-like platform and you wish to create a scalable, distributed build/test farm using Docker or Kubernetes
You need a highly available system for your Jenkins server using open source tools and technologies
Setting Up a Kubernetes Cluster
There are various platforms on which we can set up Kubernetes: cloud, on-premises VMs, and even bare metal. To keep this demonstration simple, I have chosen to set up Kubernetes on Vagrant. To Jenkins it does not matter where or how our Kubernetes cluster runs.
What This Book Covers
Chapter 1, "Elements of Continuous Delivery." A short talk on Continuous Delivery and its elements: the importance of branching strategy, manageable and reproducible pipelines, scalable build/test infrastructure, fungible build/test environments, and more. All the forthcoming chapters (Chapters 2–8) are the practical implementation of the concepts discussed in this chapter.
Chapter 2, "HA Jenkins Setup Using Pacemaker, Corosync, and DRBD." A step-by-step guide to implementing a highly available setup for Jenkins using Pacemaker, Corosync, and DRBD.
Chapter 3, "HA Jenkins Setup Using CoreOS, Docker, and GlusterFS." A step-by-step guide to implementing a highly available setup for Jenkins using CoreOS, Docker, and GlusterFS.
Chapter 4, "Setting Up Jenkins on Docker and Cloud." A step-by-step guide to installing Jenkins on various platforms such as Linux (Fedora, Ubuntu), Docker, and Cloud (AWS).
Chapter 5, "Pipeline As a Code." All about the Jenkins pipeline, the Jenkins multi-branch pipeline, the Jenkinsfile, and Jenkins's improved integration with GitHub, using a practical example that involves creating a CI (build-test) pipeline for a Maven project.
Chapter 6, "Using Containers for Distributed Builds." A step-by-step guide to creating a scalable build farm, both using Docker alone and using Kubernetes.
Chapter 7, "Pre-Tested Commits Using Jenkins." A short note on pre-tested commits (gated check-in), along with a step-by-step guide to achieving it using the distributed nature of Git and the "merge before build" feature of Jenkins.
Chapter 8, "Continuous Delivery Using Jenkins Pipeline." A step-by-step guide to creating a continuous delivery pipeline using a Jenkins pipeline job along with the required DevOps toolchain, using a practical example that involves creating a CD pipeline for a Maven project.
Branching strategy is also covered, along with details on JMeter and pipeline code to perform static code analysis, run integration tests, and publish build artifacts to Artifactory.
Book 2: Jenkins 2: Up and Running. Code samples for the book: https://resources.oreilly.com/examples/0636920064602
What Is Jenkins 2?
Much of Jenkins 2's new functionality has been available for Jenkins 1.x versions for some time via plugins. (And, to be clear, Jenkins 2 gains much of its new functionality from major updates of existing plugins as well as entirely new plugins.) But Jenkins 2 represents more. It represents a shift to focusing on these features as the preferred, core way to interact with Jenkins. Instead of filling in web forms to define jobs for Jenkins, users can now write programs using the Jenkins DSL and Groovy to define their pipelines and do other tasks.
DSL here refers to Domain-Specific Language, the “programming language” for Jenkins. The DSL is Groovy-based and it includes terms and constructs that encapsulate Jenkins-specific functionality. An example is the node keyword that tells Jenkins that you will be programmatically selecting a node (formerly “master” or “slave”) that you want to execute this part of your program on.
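As a small illustration (mine, not the book's), here is how `node` appears in a scripted pipeline; the `linux` label is an assumed agent label:

```groovy
// Scripted pipeline: node allocates an executor on a matching agent
// and runs the enclosed block of steps there.
node('linux') {              // 'linux' is an assumed agent label
    stage('Checkout') {
        checkout scm         // check out the commit that triggered the build
    }
    stage('Build') {
        sh 'mvn -B clean package'
    }
}
```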
JENKINS AND GROOVY
Jenkins has included a Groovy engine for a long time. This was used to allow advanced scripting operations and to provide access/functionality not available through the web interface.
The DSL is a core piece of Jenkins 2. It serves as a building block that makes other key user-facing features possible. Let’s look at a few of these features to see how they differentiate Jenkins 2 from “legacy” Jenkins. We’ll quickly survey a new way to separate your code from Jenkins in a Jenkinsfile, a more structured approach to creating workflows with Declarative Pipelines, and an exciting new visual interface called Blue Ocean.
The Jenkinsfile
In Jenkins 2, your pipeline definition can now be separate from Jenkins itself. In past versions of Jenkins, your job definitions were stored in configuration files in the Jenkins home directory. This meant they required Jenkins itself to be able to see, understand, and modify the definitions (unless you wanted to work with the XML directly, which was challenging). In Jenkins 2, you can write your pipeline definition as a DSL script within a text area in the web interface. However, you can also take the DSL code and save it externally as a text file with your source code. This allows you to manage your Jenkins jobs using a file containing code like any other source code, including tracking history, seeing differences, etc.
The filename that Jenkins 2 expects your job definitions/pipelines to be stored as is Jenkinsfile. You can have many Jenkinsfiles, each differentiated from the others by the project and branch it is stored with. You can have all of your code in the Jenkinsfile, or you can call out/pull in other external code via shared libraries. Also available are DSL statements that allow you to load external code into your script (more about these in Chapter 6).
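For instance, a configured shared library can be pulled in with the `@Library` annotation; in this hedged sketch, `my-shared-lib` and the `deployApp` step are invented names standing in for whatever your library defines:

```groovy
// Load a shared library configured in Jenkins; the trailing underscore
// is required when the annotation is not attached to an import.
@Library('my-shared-lib') _

// Call a custom step defined in the library (e.g., vars/deployApp.groovy).
deployApp(env: 'staging')
```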
The Jenkinsfile can also serve as a marker file, meaning that if Jenkins sees a Jenkinsfile as part of your project’s source code, it understands that this is a project/branch that Jenkins can run. It also understands implicitly which source control management (SCM) project and branch it needs to work with. It can then load and execute the code in the Jenkinsfile. If you are familiar with the build tool Gradle, this is similar to the idea of the build.gradle file used by that application. I’ll have more to say about Jenkinsfiles throughout the book.
Book 3: Production-Ready Microservices: Building Standardized Systems Across an Engineering Organization
Chapter 1, Microservices, is an introduction to microservices. It covers the basics of microservice architecture, covers some of the details of splitting a monolith into microservices, introduces the four layers of a microservice ecosystem, and concludes with a section devoted to illuminating some of the organizational challenges and trade-offs that come with adopting microservice architecture.
Chapter 2, Production-Readiness, presents the challenges of microservice standardization and introduces the eight production-readiness standards, all driven by microservice availability.
Chapter 3, Stability and Reliability, is all about the principles of building stable and reliable microservices. The development cycle, deployment pipeline, dealing with dependencies, routing and discovery, and stable and reliable deprecation and decommissioning of microservices are all covered here.
Chapter 4, Scalability and Performance, narrows in on the requirements for building scalable and performant microservices, including knowing the growth scales of microservices, using resources efficiently, being resource aware, capacity planning, dependency scaling, traffic management, task handling and processing, and scalable data storage.
Chapter 5, Fault Tolerance and Catastrophe-Preparedness, covers the principles of building fault-tolerant microservices that are prepared for any catastrophe, including common catastrophes and failure scenarios, strategies for failure detection and remediation, the ins and outs of resiliency testing, and ways to handle incidents and outages.
Chapter 6, Monitoring, is all about the nitty-gritty details of microservice monitoring and how to avoid the complexities of microservice monitoring through standardization. Logging, creating useful dashboards, and appropriately handling alerting are all covered in this chapter.
Chapter 7, Documentation and Understanding, dives into appropriate microservice documentation and ways to increase architectural and operational understanding in development teams and throughout the organization, and also contains practical strategies for implementing production-readiness standards across an engineering organization.
There are two appendixes at the end of this book. Appendix A, Production-Readiness Checklist, is the checklist described at the end of Chapter 7, Documentation and Understanding, and is a concise summary of all the production-readiness standards that are scattered throughout the book, along with their corresponding requirements. Appendix B, Evaluate Your Microservice, is a collection of all the “Evaluate Your Microservice” questions found in the corresponding sections at the end of Chapters 3-7.
Appendix A. Production-Readiness Checklist
This is a checklist to run against all microservices, either manually or in an automated way.
A Production-Ready Service Is Stable and Reliable
A Production-Ready Service Is Scalable and Performant
A Production-Ready Service Is Fault Tolerant and Prepared for Any Catastrophe
A Production-Ready Service Is Properly Monitored
A Production-Ready Service Is Documented and Understood
Appendix B. Evaluate Your Microservice
Chapters 3–7 conclude with a short list of questions associated with the production-readiness standard discussed. The questions are organized by topic and correspond to the sections within each chapter. All of the questions from each chapter have been collected here for easy reference.
Stability and Reliability
The Development Cycle
Does the microservice have a central repository where all code is stored?
Do developers work in a development environment that accurately reflects the state of production (e.g., one that accurately reflects the real world)?
Are there appropriate lint, unit, integration, and end-to-end tests in place for the microservice?
Are there code review procedures and policies in place?
Is the test, packaging, build, and release process automated?
The Deployment Pipeline
Does the microservice ecosystem have a standardized deployment pipeline?
Is there a staging phase in the deployment pipeline that is either full or partial staging?
What access does the staging environment have to production services?
Is there a canary phase in the deployment pipeline?
Do deployments run in the canary phase for a period of time that is long enough to catch any failures?
Does the canary phase accurately host a random sample of production traffic?
Are the microservice's ports the same for canary and production?
Are deployments to production done all at the same time, or incrementally rolled out?
Is there a procedure in place for skipping the staging and canary phases in case of an emergency?
Dependencies
What are this microservice's dependencies?
What are its clients?
How does this microservice mitigate dependency failures?
Are there backups, alternatives, fallbacks, or defensive caching for each dependency?
==============================================
Book 4: Microservices Deployment Cookbook
The goal of this book is to introduce you to some of the most popular and newest technologies and frameworks that will help you build and deploy microservices at scale.
Throughout this book, we will be sticking to a specific application and will try to build upon that application. For example, we will be using the same application to configure service discovery, monitoring, streaming, log management, and load balancing. So by the end of this book, you will have a fully loaded microservice that demonstrates every aspect of a microservice.
This book covers several libraries and frameworks that help you build and deploy microservices. After reading this book, you will not be an expert on all of them, but you will know where to start and how to proceed. That’s the whole intention of this book. I hope you'll like it. Good luck microservicing!
What this book covers
Chapter 1, Building Microservices with Java, shows you how to build Java-based RESTful microservices using frameworks such as Spring Boot, WildFly Swarm, Dropwizard, and Spark Java. This chapter will also show you how to write RESTful APIs using Spring MVC and Spark Java.
Chapter 2, Containerizing Microservices with Docker, shows you how to package your application using Maven plugins such as the Maven Shade plugin and the Spring Boot Maven plugin. This chapter will also show you how to install Docker on your local computer. You will also learn how to containerize your application using Docker and later push your microservice's Docker image to the public Docker Hub.
Chapter 3, Deploying Microservices on Mesos, shows you how to orchestrate a Dockerized Mesos cluster with Marathon on your local machine. You will also learn how to deploy your Dockerized microservice to a Mesos cluster using Marathon. Later, you will learn how to scale your microservice; configure ports, volumes, and environment variables; and view container logs in Marathon. Finally, you will learn how to use Marathon's REST API for managing your microservice.
Chapter 4, Deploying Microservices on Kubernetes, shows you how to orchestrate a Dockerized Kubernetes cluster using Minikube on your local machine. You will also learn how to deploy your Dockerized microservice to a Kubernetes cluster using the Kubernetes dashboard as well as kubectl. Later, you will learn how to scale your microservice; configure ports, volumes, and environment variables; and view container logs in Kubernetes using the dashboard as well as kubectl.
Chapter 5, Service Discovery and Load Balancing Microservices, shows you how to run a Dockerized ZooKeeper instance on your local machine. You will learn how to implement service discovery and load balancing using ZooKeeper. This chapter also introduces you to Consul, where you will be running a Dockerized Consul instance on your local machine. Later, you will learn how to implement service discovery and load balancing using Consul and Spring Cloud. You will also learn how to implement service discovery and load balancing using Consul and Nginx.
Chapter 6, Monitoring Microservices, shows you how to configure Spring Boot Actuator and gives you an overview of all the metrics that are exposed by Spring Boot Actuator. You will also learn how to create your own metrics using the Dropwizard metrics library and later expose them via Spring Boot Actuator. Later, you will learn how to run a Dockerized Graphite instance on your local machine. The metrics that you created using Dropwizard will then be published to Graphite. Finally, you will learn how to run a Dockerized Grafana instance on your local machine and then use it to expose your metrics in the form of dashboards.
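As a hedged sketch of the Dropwizard-metrics approach described here (the controller, endpoint, and metric names are invented, not from the book): with Spring Boot 1.x, putting `io.dropwizard.metrics:metrics-core` on the classpath gives you an auto-configured `MetricRegistry`, and metrics registered in it are published through Actuator.

```groovy
import com.codahale.metrics.MetricRegistry
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController

@RestController
class OrderController {

    private final MetricRegistry registry   // auto-configured by Spring Boot 1.x
                                            // when metrics-core is on the classpath
    @Autowired
    OrderController(MetricRegistry registry) {
        this.registry = registry
    }

    @GetMapping('/orders')                  // hypothetical endpoint
    List<String> orders() {
        // Dropwizard meter tracking request rate; shows up under /metrics
        registry.meter('orders.requests').mark()
        ['order-1', 'order-2']
    }
}
```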
Chapter 7, Building Asynchronous Streaming Systems with Kafka and Spark, shows you how to set up and run a Dockerized Kafka broker on your local machine. You will learn how to create topics in Kafka and build a Kafka Streams application in your microservice that will stream data asynchronously. You will build a similar Spark Streaming job that will have the ability to stream data asynchronously. You will get an overview of improving the performance of your streaming application. Later, you will learn how to aggregate your application logs into a Kafka topic and then explore the possibilities of integrating it with popular log-management systems.
Chapter 8, More Clustering Frameworks - DC/OS, Docker Swarm, and YARN, will give you an overview of other popular clustering frameworks in the market. You will get a high-level idea of Mesosphere's DC/OS, Docker Swarm, and Apache YARN. You will also get to see how DC/OS and Docker Swarm can be used to deploy microservices on a larger scale.
Book 5: OAuth 2 in Action
This book is intended to be a comprehensive and thorough treatment of the OAuth 2.0 protocol and many of its surrounding technologies, including OpenID Connect and JOSE/JWT. We want you to come away from this book with a deep understanding of what OAuth can do, why it works the way that it does, and how to deploy it properly and securely in an unsafe internet. The target reader for this book is someone who’s probably used OAuth 2.0, or at least heard of it, but doesn’t really know how it works or why it works that way.
Chapter 1 provides an overview of the OAuth 2.0 protocol, as well as the motivation behind its development, including approaches to API security that predate OAuth.
Chapter 2 goes into depth on the authorization code grant type, the most common and canonical of OAuth 2.0’s core grant types.
Chapters 3 through 5 demonstrate how to build a simple but fully functional OAuth 2.0 client, protected resource server, and authorization server (respectively).
Chapter 6 looks at the variations in the OAuth 2.0 protocol, including grant types other than the authorization code, as well as considerations for native applications.
Chapters 7 through 9 discuss common vulnerabilities in OAuth 2.0 clients, protected resources, and authorization servers (respectively) and how to prevent them.
Chapter 10 discusses vulnerabilities and attacks against OAuth 2.0 bearer tokens and authorization codes and how to prevent them.
Chapter 11 looks at JSON Web Tokens (JWT) and the JOSE mechanisms used in encoding them, as well as token introspection and revocation to complete the token lifecycle.
Chapter 12 looks at dynamic client registration and how that affects the characteristics of an OAuth 2.0 ecosystem.
Chapter 13 looks at how OAuth 2.0 is not an authentication protocol, and then proceeds to show how to build an authentication protocol on top of it using OpenID Connect.
Chapter 14 looks at the User Managed Access (UMA) protocol built on top of OAuth 2.0 that allows for user-to-user sharing, as well as the HEART and iGov profiles of OAuth 2.0 and OpenID Connect and how these protocols are applied in specific industry verticals.
Chapter 15 moves beyond the common bearer token of OAuth 2.0’s core specifications and describes how both Proof of Possession (PoP) tokens and TLS token binding work with OAuth 2.0.
Chapter 16 wraps everything up and directs the reader to how to apply this knowledge going forward, including a discussion of libraries and the wider OAuth 2.0 community.
Book 6: Prometheus: Up & Running
This book describes in detail how to use the Prometheus monitoring system to monitor, graph, and alert on the performance of your applications and infrastructure. This book is intended for application developers, system administrators, and everyone in between.
Expanding the Known
By performance I don't mean only the response time of and CPU used by each request, but performance in a broader sense, as in the following questions:
a. How many requests to the database are required for each customer order that is processed?
b. Is it time to purchase higher throughput networking equipment?
c. How many machines are your cache misses costing?
d. Are enough of your users interacting with a complex feature in order to justify its continued existence?
=================
Book 7: Microservice Architecture: Aligning Principles, Practices, and Culture
Who Should Read This Book
You should read this book if you are interested in the architectural, organizational, and cultural changes that are needed to succeed with a microservice architecture. We primarily wrote this book for technology leaders and software architects who want to shift their organizations toward the microservices style of application development. You don't have to be a CTO or enterprise architect to enjoy this book, but we've written our guidance under the assumption that you are able to influence the organizational design, technology platform, and software architecture at your company.
Microservice Architecture Reading List
Here’s a quick rundown of the chapters:
Chapter 1, The Microservices Way This chapter outlines the principles, practices, and culture that define microservice architecture.
Chapter 2, The Microservices Value Proposition This chapter examines the benefits of microservice architecture and some techniques to achieve them.
Chapter 3, Designing Microservice Systems This chapter explores the system aspects of microservices and illustrates a design process for microservice architecture.
Chapter 4, Establishing a Foundation This chapter discusses the core principles for microservice architecture, as well as the platform components and cultural elements needed to thrive.
Chapter 5, Service Design This chapter takes the “micro” design view, examining the fundamental design concepts for individual microservices.
Chapter 6, System Design and Operations This chapter takes the “macro” design view, analyzing the critical design areas for the software system made up of the collection of microservices.
Chapter 7, Adopting Microservices in Practice This chapter provides practical guidance on how to deal with common challenges organizations encounter as they introduce microservice architecture.
Chapter 8, Epilogue Finally, this chapter examines microservices and microservice architecture in a timeless context, and emphasizes the central theme of the book: adaptability to change.
Best Practices
These resources provide guidance on what to do—and what not to do—when it comes to implementing a microservice architecture:
• Alagarasan, Vijay. "Seven Microservices Anti-patterns", August 24, 2015.
• Cockcroft, Adrian. "State of the Art in Microservices", December 4, 2014.
• Fowler, Martin. "Microservice Prerequisites", August 28, 2014.
• Fowler, Martin. "Microservice Tradeoffs", July 1, 2015.
• Humble, Jez. "Four Principles of Low-Risk Software Release", February 16, 2012.
• Humble, Jez, Chris Read, and Dan North. "The Deployment Production Line". In Proceedings of the conference on AGILE 2006, 113–118. IEEE Computer Society.
• Kniberg, Henrik, and Anders Ivarsson. "Scaling Agile at Spotify", October 2012.
• Vasters, Clemens. "Sagas", September 1, 2012.
• Wootton, Benjamin. "Microservices are Not a Free Lunch", April 8, 2014.
Example Implementations - Refer in book
• Amazon Web Services
• Autoscout24
• CA Technologies (Rally)
• Disney
• Gilt
— http://www.infoq.com/presentations/microservices-dependencies
— http://www.infoq.com/news/2015/04/scaling-microservices-gilt
Chapter 3, Designing Microservice Systems:
In this chapter we will lay the groundwork for thinking about your application in a way that helps you unlock the potential value of a microservices system. The concepts introduced are rooted in some pretty big domains: design, complexity, and systems thinking. But you don't need to be an expert in any of those fields to be a good microservice designer. Instead, we will highlight a model-driven way of thinking about your application that encapsulates the essential parts of complexity and systems thinking. Finally, at the end of this chapter we will introduce an example of a design process that can help promote a design-driven approach to microservices implementation.
With that in mind, Figure 3-1 depicts a microservice design model made up of five parts: Service, Solution, Process and Tools, Organization, and Culture.
Service
Implementing well-designed microservices and APIs is essential to a microservice system. In a microservice system, the services form the atomic building blocks from which the entire organism is built. If you can get the design, scope, and granularity of your services just right, you'll be able to induce complex behavior from a set of components that are deceptively simple. In Chapter 5 we'll give you some guidance on designing effective microservices and APIs.
Solution
A solution architecture is distinct from the individual service design elements because it represents a macro view of our solution. When designing a particular microservice, your decisions are bounded by the need to produce a single output—the service itself. Conversely, when designing a solution architecture, your decisions are bounded by the need to coordinate all the inputs and outputs of multiple services. This macro-level view of the system allows the designer to induce more desirable system behavior. For example, a solution architecture that provides discovery, safety, and routing features can reduce the complexity of individual services. We will dive into the patterns that you can employ to produce good microservice system behavior in Chapter 6.
Process and Tools
Your microservice system is not just a byproduct of the service components that handle messages at runtime. The system behavior is also a result of the processes and tools that workers in the system use to do their jobs. In a microservice system, this usually includes tooling and processes related to software development, code deployment, maintenance, and product management. Choosing the right processes and tools is an important factor in producing good microservice system behavior. For example, adopting standardized processes like DevOps and Agile, or tools like Docker containers, can increase the changeability of your system. In Chapters 4 and 6 we will take a closer look at the processes and tools that can have the biggest impact on a microservices system.
Organization
How we work is often a product of who we work with and how we communicate. From a microservice system perspective, organizational design includes the structure, direction of authority, granularity, and composition of teams. Many of the companies that have had success with microservice architecture point to their organizational design as a key ingredient. But organizational design is incredibly context-sensitive, and you may find yourself in a terrible situation if you try to model your 500+ employee enterprise structure after a 10-person startup (and vice versa).
A good microservice system designer understands the implications of changing these organizational properties and knows that good service design is a byproduct of good organizational design. We will dive deeper into team design concepts in Chapter 4.
Putting It Together: The Holistic System
When put together, all of these design elements form the microservices system. They are interconnected, and a change to one element can have a meaningful and sometimes unpredictable impact on other elements. The system changes over time and is unpredictable. It produces behavior that is greater than the behavior of its individual components. It adapts to changing contexts, environments, and stimuli. In short, the microservices system is complex, and teasing desirable behaviors and outcomes from that system isn't an easy task. But some organizations have had enormous success in doing so, and we can learn from their examples.
A Microservices Design Process
Figure 3-2 illustrates a framework for a design process that you can use in your own microservice system designs. In practice, it is likely that you’ll need to customize the process to fit within your own unique constraints and context. You might end up using these design activities in a different order than given here. You may also decide that some activities aren’t applicable to your goals or that other steps need to be added.
Set Optimization Goals --> Develop Principles --> Sketch the System Design --> Implement, Observe, and Adjust
Set Optimization Goals
The behavior of your microservice system is "correct" when it helps you achieve your goals. There isn't a set of optimization goals that perfectly applies to all organizations, so one of your first tasks will be to identify the goals that make sense for your particular situation. The choice you make here is important—every decision in the design process after this is a trade-off made in favor of the optimization goal.
For example, a financial information system might be optimized for reliability and security above all other factors. That doesn’t mean that changeability, usability, and other system qualities are unimportant—it simply means that the designers will always make decisions that favor security and reliability above all other things.
Development Principles
Underpinning a system optimization goal is a set of principles. Principles outline the general policies, constraints, and ideals that should be applied universally to the actors within the system to guide decision-making and behavior. The best designed principles are simply stated, easy to understand, and have a profound impact on the system they act upon.
Sketch the System Design
Implement, Observe, and Adjust
The perfect microservice system provides perfect information about all aspects of the system across all the domains of culture, organization, solution architecture, services, and process. Of course, this is unrealistic. It is more realistic to gain essential visibility into our system by identifying a few key measurements that give us the most valuable information about system behavior. In organizational design, this type of metric is known as a key performance indicator (KPI). The challenge for the microservice designer is to identify the right ones.
===========================
Book 8: NoSQL Web Development with Apache Cassandra (Deepak Vohra)
What This Book Covers
In Chapter 1, “Using Cassandra with Hector,” you learn how to use the Hector Java client to access Cassandra and create a CRUD (create, read, update, delete) application in the Eclipse IDE.
Chapter 2, "Querying Cassandra with CQL," introduces the Cassandra Query Language (CQL), which is similar in syntax to SQL. It discusses the INSERT, SELECT, UPDATE, WHERE, BATCH, and DELETE clauses in CQL, with examples.
Chapter 3, "Using Cassandra with DataStax Java Driver," discusses using CQL 3 with the DataStax Java client in the Eclipse IDE. In addition to CRUD, it discusses support for running async queries and prepared statement queries (see the sketch below).
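Here is a minimal sketch of that driver usage called from Groovy (the `catalog` keyspace and `books` table are invented, and this assumes the 2.x-era `com.datastax.driver.core` API matching the book's Cassandra version):

```groovy
import com.datastax.driver.core.Cluster
import com.datastax.driver.core.PreparedStatement
import com.datastax.driver.core.ResultSetFuture
import com.datastax.driver.core.Session

// Connect to a local node; keyspace and table names are invented.
Cluster cluster = Cluster.builder().addContactPoint('127.0.0.1').build()
Session session = cluster.connect('catalog')

// Plain CQL for the "C" in CRUD
session.execute("INSERT INTO books (id, title) VALUES (1, 'NoSQL Web Development')")

// Prepared statement: parsed once by the cluster, bound per execution
PreparedStatement ps = session.prepare('SELECT title FROM books WHERE id = ?')
println session.execute(ps.bind(1)).one().getString('title')

// Async query: returns a future instead of blocking the caller
ResultSetFuture future = session.executeAsync('SELECT count(*) FROM books')
println future.getUninterruptibly().one().getLong(0)

cluster.close()
```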
Chapter 4, "Using Apache Cassandra with PHP"
Chapter 5, "Using a Ruby Client with Cassandra"
Chapter 6, "Using Node.js with Cassandra"
Chapter 7, "Migrating MongoDB to Cassandra" (using the Hector Java client)
Chapter 8, "Migrating Couchbase to Cassandra"
Chapter 9, "Using Cassandra with Kundera," introduces Kundera, a JPA 2.0-compliant object–data store mapping library for NoSQL data stores. In this chapter, you will create a JPA project, a JPA entity class, and a JPA client class in the Eclipse IDE. Then you will configure the object–data store mapping in the persistence.xml file. Finally, you will run CRUD operations on Cassandra using the JPA application.
In Chapter 10, "Using Spring Data with Cassandra," you will use Apache Cassandra with the Spring Data project. You will create a Maven project in the Eclipse IDE to run CRUD operations on Cassandra with Spring Data.
============================================
Book 9: Usage-Driven Database Design: From Logical Data Modeling through Physical Schema Definition (Taming NoSQL and Relational Schemas)
This book is divided into five parts.
Part I consists of a single chapter.
Chapter 1,“Introduction to Usage-Driven Database Design,” introduces the four database design principles. Although these principles are geared toward database design, they are, in fact, a sound starting point for any system development activity. The chapter ends with the introduction of usage-driven database design, an end-to-end framework for developing a functioning database, starting with the logical data model and ending with a physical database schema.
Part II focuses on logical data modeling.
Chapter 2, “The E-R Approach,” introduces Peter Chen’s entity-relationship (E-R) approach, while
Chapter 3, “More About the E-R Approach,” focuses on more advanced logical data modeling topics.
Chapter 4, “Building the Logical Data Model,” uses the Usage-Driven Database Design: Logical Data Modeling phase as a template to tackle the real-world tasks of actually building a logical data model for an enterprise.
Chapter 5, “LDM Best Practices,” presents lessons learned from the database trenches.
Chapter 6, “LDM Pitfalls,” gives advice on what to avoid when data modeling.
Chapter 7, “LDM Perils to Watch For,” presents some logical data modeling cautionary tales.
In Part III, the logical data model becomes a functioning database schema.
Chapter 8, “Introduction to Physical Database Design,” presents a limited history of data management; however, the focus is gaining practical rather than historical insight. The concepts presented are used in later chapters for creating great databases.
Chapter 9, “Introduction to Physical Schema Definition,” introduces the four steps in the Usage-Driven Database Design: Physical Schema Definition phase that will turn the logical data model into a physical database schema.
Chapter 10, “Transformation: Creating the Physical Data Model,” converts the logical data model into a physical data model.
Chapter 11, “Utilization: Merging Data and Process,” modifies the physical data model to reflect exactly how an application will use the database. This is an important chapter because many database design approaches do not adequately take data usage into account.
Chapter 12, “Formalization: Creating a Schema,” converts the modified physical data model into a functioning physical database schema and subschemas.
Chapter 13, “Customization: Enhancing Performance,” addresses those situations where a simple database design cannot handle the load that will be placed on it. This step introduces performance-enhancing techniques (software, hardware, NoSQL, etc.) that can be applied to almost any situation to accommodate almost any performance requirements.
Chapter 14, “The Data Warehouse,” shows how U3D can be used to construct a data warehouse to support a decision support system.
Chapter 15, “The Big Data Decision Support System,” shows how U3D can be used with nontraditional data management products, such as Hadoop, to accommodate unstructured Big Data.
Part IV contains a single chapter, Chapter 16, "A Look Ahead," which discusses where the DBMS community (teachers, vendors, and technical users) is or should be going. Part V contains five appendixes that include a glossary, data management object definitions, formulas, and a list of U3D deliverables.
This book is aggressively practical and generic. For example, it vigorously keeps logical data modeling logical, while holding off on physical issues until physical database design—not to justify some philosophical or theoretical construct but for the practical reason that it greatly increases the chances of developing a successful database design. It is DBMS generic or agnostic in that it does not tie the hands of the developer who is attempting to solve real-world information management problems. The right solution might involve a relational DBMS or it might require a NoSQL DBMS. Or, more likely, the DBMS choice was made some time ago, and now the database designer needs help in making the best of an imperfect DBMS situation. In summary, this book is for the undervalued data management professional who has to transform a combination of glossy DBMS vendor brochures and dry textbook commentary into a functioning fundamental part of the enterprise.
=============================
Book 10: Expert Apache Cassandra Administration: Install, Configure, Optimize, and Secure Apache Cassandra Databases
What this book covers
Chapter 1, Quick Start, is about getting excited and gaining instant gratification. If you have no prior experience with Cassandra, you leave this chapter with enough information to get yourself started on the next big project.
Chapter 2, Cassandra Architecture, covers design decisions and Cassandra's internal plumbing. If you have never worked with a distributed system, this chapter has a lot of useful distributed design concepts. This chapter will be helpful for the rest of the book when we look at patterns and infrastructure management. It will also help you understand discussions on the Cassandra mailing list and in JIRA. It is a theoretical chapter; you may skip it and come back later.
Chapter 3, Design Patterns, discusses various design decisions and their pros and cons. You will learn about Cassandra's limitations and capabilities. If you are planning to write a program that uses Cassandra, this is the chapter for you. Do not miss Chapter 9, Introduction to CQL 3 and Cassandra 1.2, for CQL.
Chapter 4, Deploying a Cluster, is a full chapter about how to deploy a cluster correctly. Once you have gone through the chapter, you will realize it is not really hard to deploy a cluster. It is probably one of the simplest distributed systems.
Chapter 5, Performance Tuning, explains how to get the most out of the hardware the cluster is deployed on. Usually you will not need to turn a lot of knobs, and the defaults are just fine.
Chapter 6, Managing a Cluster, is about the daily DevOps drills: scaling up a cluster, shrinking it down, replacing a dead node, and balancing the data load across the cluster.
Chapter 7, Monitoring, talks about the various tools that you can use to monitor Cassandra. If you already have a monitoring system, you would probably want to plug Cassandra health monitoring into it, or you may choose to use dedicated Cassandra monitoring tools.
Chapter 8, Integration, shows how to integrate Cassandra with other tools. Cassandra is about large data sets and fast writes and reads over terabytes of data. What is the use of data if you can't analyze it? Cassandra can be smoothly integrated with various Hadoop projects, and integrating with tools such as Spark and Twitter Storm is just as easy. This chapter gives you an introduction to get you started with setting up Cassandra with Hadoop.
Chapter 9, Introduction to CQL 3 and Cassandra 1.2, fills the version gap. At the time of writing the book, Cassandra's latest version was 1.1.11, and the rest of the book uses that version and the Thrift API to connect to Cassandra. Cassandra 1.2 was released later, and Cassandra 2.0 is also expected to be released any time now. CQL 3 is the preferred way to query Cassandra, and Cassandra 1.2 has some interesting upgrades.