<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Rakesh Vardan]]></title><description><![CDATA[Rakesh Vardan]]></description><link>https://blog.rakeshvardan.com</link><generator>RSS for Node</generator><lastBuildDate>Thu, 16 Apr 2026 11:05:05 GMT</lastBuildDate><atom:link href="https://blog.rakeshvardan.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Choosing the Right ORM & Backend Stack: A Journey Through Spring Boot, NestJS, and .NET]]></title><description><![CDATA[Introduction: My Backend Journey
Every backend developer eventually faces the same question: which stack is best for my project?
With a background that spans both automation engineering and full-stack development, I've had the opportunity to work wit...]]></description><link>https://blog.rakeshvardan.com/choosing-the-right-orm-and-backend-stack-a-journey-through-spring-boot-nestjs-and-net</link><guid isPermaLink="true">https://blog.rakeshvardan.com/choosing-the-right-orm-and-backend-stack-a-journey-through-spring-boot-nestjs-and-net</guid><category><![CDATA[Backend Development]]></category><category><![CDATA[Databases]]></category><category><![CDATA[orm]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Rakesh Vardan]]></dc:creator><pubDate>Thu, 28 Aug 2025 15:56:19 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/QMCmngPGjQk/upload/caa6cc6659f8dac9dda21450a4fcdeef.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction-my-backend-journey"><strong>Introduction: My Backend Journey</strong></h2>
<p>Every backend developer eventually faces the same question: which stack is best for my project?</p>
<p>With a background that spans both automation engineering and full-stack development, I've had the opportunity to work with various backend technologies. After spending considerable time in the Java ecosystem with Spring Boot, I became curious about other approaches to backend development. This curiosity led me to explore NestJS with Prisma in the JavaScript/TypeScript world and .NET Web API with Entity Framework Core in the C# realm.</p>
<p>What stood out was how each stack reflects its ecosystem’s philosophy: Spring Boot’s enterprise reliability, NestJS’s developer experience, and .NET’s deep integration with Microsoft tools. In this article, I'll share what I've learned about these three powerful approaches to building backend applications and interacting with databases.</p>
<p>Modern backend development is rich with frameworks, libraries, and tools that help engineers interact with databases, enforce best practices, and deliver production-ready applications:</p>
<ol>
<li><p><strong>Spring Boot + JPA + Flyway (Java ecosystem)</strong></p>
</li>
<li><p><strong>NestJS + Prisma (Node.js ecosystem)</strong></p>
</li>
<li><p><strong>.NET Web API + EF Core (C# ecosystem)</strong></p>
</li>
</ol>
<p>Each stack has its own philosophy, learning curve, and sweet spots. Whether you're a seasoned developer looking to expand your toolkit or a team lead evaluating technology choices for your next project, I hope my experience helps you navigate these options more confidently.</p>
<hr />
<h2 id="heading-1-spring-boot-jpa-flyway-java"><strong>1. Spring Boot + JPA + Flyway (Java)</strong></h2>
<h3 id="heading-overview"><strong>Overview</strong></h3>
<p>Spring Boot has been my go-to framework for years, especially for enterprise-grade applications. This battle-tested Java framework, when combined with <strong>JPA (Java Persistence API)</strong> for ORM and <strong>Flyway</strong> for database migrations, creates a robust foundation for applications that need to stand the test of time.</p>
<blockquote>
<p><em>From my experience:</em> Spring Boot shines when you're building systems that will evolve over many years with multiple teams contributing. The structure it enforces pays dividends as applications grow in complexity.</p>
</blockquote>
<h3 id="heading-features"><strong>Features</strong></h3>
<ul>
<li><p><strong>JPA/Hibernate ORM</strong>: Abstracts SQL operations into Java objects</p>
</li>
<li><p><strong>Flyway</strong>: Manages database schema evolution with version control</p>
</li>
<li><p><strong>Spring Boot autoconfiguration</strong>: Dramatically reduces boilerplate code</p>
</li>
</ul>
<h3 id="heading-example-basic-user-entity"><strong>Example: Basic User Entity</strong></h3>
<p>Here's how a simple user entity looks in the Spring Boot world:</p>
<pre><code class="lang-java"><span class="hljs-meta">@Entity</span>
<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">User</span> </span>{
    <span class="hljs-meta">@Id</span>
    <span class="hljs-meta">@GeneratedValue</span>
    <span class="hljs-keyword">private</span> Long id;

    <span class="hljs-keyword">private</span> String username;
    <span class="hljs-keyword">private</span> String email;
}
</code></pre>
<p>The repository interface makes database operations straightforward:</p>
<pre><code class="lang-java"><span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">interface</span> <span class="hljs-title">UserRepository</span> <span class="hljs-keyword">extends</span> <span class="hljs-title">JpaRepository</span>&lt;<span class="hljs-title">User</span>, <span class="hljs-title">Long</span>&gt; </span>{
    <span class="hljs-function">Optional&lt;User&gt; <span class="hljs-title">findByUsername</span><span class="hljs-params">(String username)</span></span>;
}
</code></pre>
<p>And database migrations with Flyway are just SQL scripts with version numbers:</p>
<pre><code class="lang-sql"><span class="hljs-comment">-- V1__create_user_table.sql</span>
<span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">TABLE</span> <span class="hljs-keyword">users</span> (
  <span class="hljs-keyword">id</span> BIGSERIAL PRIMARY <span class="hljs-keyword">KEY</span>,
  username <span class="hljs-built_in">VARCHAR</span>(<span class="hljs-number">50</span>) <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span>,
  email <span class="hljs-built_in">VARCHAR</span>(<span class="hljs-number">100</span>) <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span>
);
</code></pre>
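<p>Because each migration is just a versioned SQL file, schema changes stay reviewable in ordinary code review. A hypothetical follow-up migration (the column name and default here are illustrative, not taken from a real project) might look like this:</p>
<pre><code class="lang-sql">-- V2__add_created_at_to_users.sql
ALTER TABLE users
  ADD COLUMN created_at TIMESTAMP NOT NULL DEFAULT now();
</code></pre>
<p>Flyway records every applied version in its <code>flyway_schema_history</code> table, so each environment converges on the same schema by replaying the scripts in order.</p>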
<h3 id="heading-use-cases"><strong>Use Cases</strong></h3>
<ul>
<li><p>Enterprise banking, telecom, healthcare apps</p>
</li>
<li><p>Large teams with structured requirements</p>
</li>
<li><p>Complex domain-driven design implementations</p>
</li>
<li><p>Systems requiring long-term maintenance</p>
</li>
</ul>
<h3 id="heading-strengths"><strong>Strengths</strong></h3>
<ul>
<li><p>Rock-solid mature ecosystem with proven production reliability</p>
</li>
<li><p>Excellent tooling from Spring Data to Spring Security</p>
</li>
<li><p>Comprehensive documentation and massive community</p>
</li>
<li><p>Strong integration with virtually any relational database</p>
</li>
</ul>
<h3 id="heading-challenges-ive-faced"><strong>Challenges I've Faced</strong></h3>
<ul>
<li><p>The learning curve can be steep for newcomers to Java</p>
</li>
<li><p>JPA/Hibernate sometimes generates surprising SQL queries that require tuning</p>
</li>
<li><p>Configuration can get verbose for complex scenarios</p>
</li>
<li><p>The development feedback loop is slower than with some newer stacks</p>
</li>
</ul>
<hr />
<h2 id="heading-2-nestjs-prisma-nodejs"><strong>2. NestJS + Prisma (Node.js)</strong></h2>
<h3 id="heading-overview-1"><strong>Overview</strong></h3>
<p>When I first tried <strong>NestJS</strong> after years in the Java world, I was pleasantly surprised by its familiar structure (inspired by Angular) combined with the speed of the Node.js ecosystem. Paired with <strong>Prisma</strong> as the ORM, it creates a delightful developer experience with strong TypeScript type safety.</p>
<blockquote>
<p><em>From my experience:</em> NestJS + Prisma delivers incredible developer productivity while maintaining good architectural practices. The real magic is in Prisma's schema-driven approach and type generation.</p>
</blockquote>
<h3 id="heading-features-1"><strong>Features</strong></h3>
<ul>
<li><p><strong>Prisma schema</strong>: Acts as the single source of truth for your database schema and migrations</p>
</li>
<li><p><strong>Type-safe queries</strong>: Eliminates most runtime SQL errors with compile-time checking</p>
</li>
<li><p><strong>NestJS modules &amp; decorators</strong>: Create a well-organized codebase with clear boundaries</p>
</li>
</ul>
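<p>To see what "compile-time checking" buys you, here is a rough, self-contained sketch of the idea behind Prisma's generated types. In a real project the types below would come from <code>npx prisma generate</code>; here they are hand-written stand-ins so the example runs on its own:</p>
<pre><code class="lang-typescript">// Hand-written stand-ins for the types Prisma would generate from the schema.
interface User {
  id: number;
  username: string;
  email: string;
}

// The where clause is typed against the model's fields.
interface UserWhereUnique {
  id?: number;
  username?: string;
}

const table: User[] = [
  { id: 1, username: "rakesh", email: "rakesh@example.com" },
];

// A tiny findUnique that mirrors the call shape of prisma.user.findUnique.
function findUnique(where: UserWhereUnique): User | null {
  for (const row of table) {
    if (where.id !== undefined) {
      if (row.id === where.id) { return row; }
    }
    if (where.username !== undefined) {
      if (row.username === where.username) { return row; }
    }
  }
  return null;
}

const found = findUnique({ username: "rakesh" });
// findUnique({ usernme: "rakesh" });  // compile error: the typo is caught at build time

console.log(found ? found.email : "not found");
</code></pre>
<p>Because the <code>where</code> argument is typed against the model, a misspelled field name fails the build instead of surfacing as a runtime SQL error.</p>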
<h3 id="heading-example-user-model"><strong>Example: User Model</strong></h3>
<p>The Prisma schema is clean and intuitive:</p>
<pre><code class="lang-plaintext">model User {
  id        Int      @id @default(autoincrement())
  username  String   @unique
  email     String
  createdAt DateTime @default(now())
}
</code></pre>
<p>Services are concise and fully typed:</p>
<pre><code class="lang-typescript"><span class="hljs-meta">@Injectable</span>()
<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> UserService {
  <span class="hljs-keyword">constructor</span>(<span class="hljs-params"><span class="hljs-keyword">private</span> prisma: PrismaService</span>) {}

  <span class="hljs-keyword">async</span> findByUsername(username: <span class="hljs-built_in">string</span>) {
    <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>.prisma.user.findUnique({
      where: { username },
    });
  }
}
</code></pre>
<p>And migrations are as simple as:</p>
<pre><code class="lang-bash">npx prisma migrate dev --name init
</code></pre>
<h3 id="heading-use-cases-1"><strong>Use Cases</strong></h3>
<ul>
<li><p>Startups and fast prototypes</p>
</li>
<li><p>Real-time applications (chat, dashboards)</p>
</li>
<li><p>Teams with JavaScript/TypeScript expertise</p>
</li>
<li><p>Microservices and API-first applications</p>
</li>
</ul>
<h3 id="heading-strengths-1"><strong>Strengths</strong></h3>
<ul>
<li><p>Lightning-fast development experience</p>
</li>
<li><p>Excellent TypeScript integration with autocompletion for database queries</p>
</li>
<li><p>Great for teams transitioning from frontend to full-stack</p>
</li>
<li><p>Modern async/await patterns feel natural and clean</p>
</li>
</ul>
<h3 id="heading-challenges"><strong>Challenges</strong></h3>
<ul>
<li><p>Ecosystem, while growing rapidly, isn't as mature as Java's</p>
</li>
<li><p>Prisma has some limitations with complex queries and specialized database features</p>
</li>
<li><p>Production deployment requires more careful consideration (Node.js runtime behaviors)</p>
</li>
<li><p>Less battle-tested for extremely high-scale enterprise applications</p>
</li>
</ul>
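<p>For the complex-query gap, Prisma does provide an escape hatch: <code>$queryRaw</code>, a tagged template that sends interpolated values as bound parameters instead of splicing them into the SQL string. The toy function below imitates that tagged-template mechanic in isolation (it illustrates the idea only; it is not Prisma's actual implementation):</p>
<pre><code class="lang-typescript">// Toy illustration of the tagged-template style used by Prisma's $queryRaw.
// It joins the literal SQL fragments with numbered placeholders and keeps the
// user-supplied values separate, the way a driver does for parameter binding.
function buildQuery(strings: TemplateStringsArray, ...values: unknown[]) {
  let text = strings[0];
  for (let i = 0; i !== values.length; i = i + 1) {
    text = text + "$" + String(i + 1) + strings[i + 1];
  }
  return { text: text, params: values };
}

const name = "rakesh'; DROP TABLE users; --";
const q = buildQuery`SELECT * FROM users WHERE username = ${name}`;

console.log(q.text);   // SELECT * FROM users WHERE username = $1
console.log(q.params); // the raw value travels separately, so it cannot inject
</code></pre>
<p>In real usage this would be <code>await prisma.$queryRaw</code>; it is worth reserving for the few queries Prisma's API cannot express, since raw SQL bypasses the generated types.</p>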
<hr />
<h2 id="heading-3-net-web-api-ef-core-c"><strong>3. .NET Web API + EF Core (C#)</strong></h2>
<h3 id="heading-overview-2"><strong>Overview</strong></h3>
<p>Coming to <strong>.NET Web API</strong> after working with both Spring Boot and NestJS was an interesting experience. Microsoft's framework for building RESTful APIs, combined with <strong>Entity Framework Core</strong> as its ORM, feels like a middle ground - offering Java's robustness with some of TypeScript's developer experience benefits.</p>
<blockquote>
<p><em>From my experience:</em> .NET Core feels like it takes the best ideas from both worlds - Java's structure and TypeScript's expressiveness - while adding some uniquely powerful features like LINQ.</p>
</blockquote>
<h3 id="heading-features-2"><strong>Features</strong></h3>
<ul>
<li><p><strong>EF Core migrations</strong>: Built-in database schema evolution with intuitive tooling</p>
</li>
<li><p><strong>LINQ queries</strong>: Strongly-typed, composable query expressions that are a joy to write</p>
</li>
<li><p><strong>ASP.NET Core DI &amp; middleware</strong>: Flexible request pipeline configuration</p>
</li>
</ul>
<h3 id="heading-example-user-entity"><strong>Example: User Entity</strong></h3>
<p>The C# entity is concise and readable:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">User</span> {
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">int</span> Id { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> Username { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } = <span class="hljs-keyword">string</span>.Empty;
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> Email { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } = <span class="hljs-keyword">string</span>.Empty;
}
</code></pre>
<p>The DbContext defines your database structure:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">AppDbContext</span> : <span class="hljs-title">DbContext</span> {
    <span class="hljs-keyword">public</span> DbSet&lt;User&gt; Users { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
}
</code></pre>
<p>Migrations are generated and applied through the CLI:</p>
<pre><code class="lang-bash">dotnet ef migrations add InitialCreate
dotnet ef database update
</code></pre>
<p>Controllers are clean and well-structured:</p>
<pre><code class="lang-csharp">[<span class="hljs-meta">ApiController</span>]
[<span class="hljs-meta">Route(<span class="hljs-meta-string">"api/[controller]"</span>)</span>]
<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">UserController</span> : <span class="hljs-title">ControllerBase</span> {
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> AppDbContext _context;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">UserController</span>(<span class="hljs-params">AppDbContext context</span>)</span> {
        _context = context;
    }

    [<span class="hljs-meta">HttpGet(<span class="hljs-meta-string">"{username}"</span>)</span>]
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;ActionResult&lt;User&gt;&gt; GetUser(<span class="hljs-keyword">string</span> username) {
        <span class="hljs-keyword">var</span> user = <span class="hljs-keyword">await</span> _context.Users.FirstOrDefaultAsync(u =&gt; u.Username == username);
        <span class="hljs-keyword">return</span> user ?? NotFound();
    }
}
</code></pre>
<h3 id="heading-use-cases-2"><strong>Use Cases</strong></h3>
<ul>
<li><p>Enterprise applications in Microsoft ecosystems</p>
</li>
<li><p>Organizations using Azure or Windows servers</p>
</li>
<li><p>Teams with C#/.NET expertise</p>
</li>
<li><p>Applications requiring integrated Visual Studio tooling</p>
</li>
</ul>
<h3 id="heading-strengths-2"><strong>Strengths</strong></h3>
<ul>
<li><p>First-class support from Microsoft with frequent updates</p>
</li>
<li><p>LINQ provides an intuitive and powerful way to query data</p>
</li>
<li><p>Excellent integration with Visual Studio's debugging and profiling tools</p>
</li>
<li><p>Strong performance characteristics with relatively low verbosity</p>
</li>
</ul>
<h3 id="heading-challenges-1"><strong>Challenges</strong></h3>
<ul>
<li><p>More resource-intensive than Node.js for equivalent workloads</p>
</li>
<li><p>Some EF Core features like lazy loading have quirks that require attention</p>
</li>
<li><p>The open-source community, while growing, is smaller than JavaScript's or Java's</p>
</li>
<li><p>Breaking changes between major versions can create migration headaches</p>
</li>
</ul>
<hr />
<h2 id="heading-head-to-head-comparison"><strong>Head-to-Head Comparison</strong></h2>
<p>After working with all three stacks on similar projects, here's my comparison of key aspects:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature</td><td>Spring Boot + JPA + Flyway</td><td>NestJS + Prisma</td><td>.NET Web API + EF Core</td></tr>
</thead>
<tbody>
<tr>
<td>Language</td><td>Java</td><td>TypeScript/JavaScript</td><td>C#</td></tr>
<tr>
<td>ORM</td><td>Hibernate (JPA)</td><td>Prisma</td><td>EF Core</td></tr>
<tr>
<td>Migrations</td><td>Flyway</td><td>Prisma Migrate</td><td>EF Migrations</td></tr>
<tr>
<td>Query Style</td><td>JPQL / Criteria</td><td>Fluent TypeScript API</td><td>LINQ</td></tr>
<tr>
<td>Ecosystem</td><td>Mature, Enterprise-heavy</td><td>Modern, Startup-friendly</td><td>Enterprise + Microsoft-focused</td></tr>
<tr>
<td>Learning Curve</td><td>Steep</td><td>Moderate</td><td>Moderate</td></tr>
<tr>
<td>Performance</td><td>High, though Hibernate queries may need tuning</td><td>Fast and lightweight</td><td>High, strong optimization</td></tr>
<tr>
<td>Best For</td><td>Enterprise-scale, DDD</td><td>Startups, agile teams</td><td>Enterprises on Microsoft stack</td></tr>
</tbody>
</table>
</div><p>I've found that the learning curve for Spring Boot is the steepest, particularly for developers without prior Java experience. NestJS strikes a good balance of structure and approachability, while .NET benefits from C#'s relatively straightforward syntax combined with powerful language features.</p>
<p>When it comes to database interactions, all three provide abstractions, but with different philosophies:</p>
<ul>
<li><p><strong>JPA/Hibernate</strong> gives you the most control but requires more knowledge about how the ORM works under the hood</p>
</li>
<li><p><strong>Prisma</strong> is the most opinionated and "batteries-included" approach, making common operations extremely simple</p>
</li>
<li><p><strong>Entity Framework Core</strong> hits a nice balance with LINQ offering both simplicity and power</p>
</li>
</ul>
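<p>To make the contrast concrete, here is roughly how the same "find user by username" lookup reads in each stack. The Java and C# idioms are shown as comments for comparison, and the TypeScript version runs against a small in-memory stand-in for the Prisma client (illustrative only):</p>
<pre><code class="lang-typescript">// JPA (Java):  repo.findByUsername(username)  -- derived from the method name
// LINQ (C#):   _context.Users.FirstOrDefaultAsync(u =&gt; u.Username == username)
// Prisma (TypeScript), runnable below against an in-memory stand-in:

interface User { id: number; username: string; email: string; }

const users: User[] = [
  { id: 1, username: "rakesh", email: "rakesh@example.com" },
];

// Mimics the call shape of prisma.user.findUnique without a database.
const prisma = {
  user: {
    findUnique: function (args: { where: { username: string } }) {
      let found: User | null = null;
      for (const u of users) {
        if (u.username === args.where.username) { found = u; }
      }
      return Promise.resolve(found);
    },
  },
};

prisma.user.findUnique({ where: { username: "rakesh" } }).then(function (u) {
  console.log(u ? u.email : "not found"); // prints rakesh@example.com
});
</code></pre>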
<p>For team productivity, I've noticed that teams can typically deliver features faster with NestJS + Prisma initially, but as applications grow in complexity, the structure provided by Spring Boot becomes increasingly valuable. .NET teams, especially those already familiar with Microsoft technologies, often maintain consistent productivity throughout the project lifecycle.</p>
<hr />
<h2 id="heading-real-world-case-study-building-a-user-management-system"><strong>Real-World Case Study: Building a User Management System</strong></h2>
<p>To make this comparison tangible, let's see how each stack would implement the same user management system. I'll walk through creating a system that supports:</p>
<ul>
<li><p>User registration (username + email + password)</p>
</li>
<li><p>Fetching users by username</p>
</li>
<li><p>Updating user information</p>
</li>
<li><p>Deleting a user</p>
</li>
</ul>
<p>This is a common requirement in many applications, and seeing the implementation differences will highlight the philosophy and approach of each stack.</p>
<h3 id="heading-1-spring-boot-jpa-flyway-implementation"><strong>1. Spring Boot + JPA + Flyway Implementation</strong></h3>
<h4 id="heading-database-migration-flyway"><strong>Database Migration (Flyway)</strong></h4>
<pre><code class="lang-sql"><span class="hljs-comment">-- V1__create_users_table.sql</span>
<span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">TABLE</span> <span class="hljs-keyword">users</span> (
  <span class="hljs-keyword">id</span> BIGSERIAL PRIMARY <span class="hljs-keyword">KEY</span>,
  username <span class="hljs-built_in">VARCHAR</span>(<span class="hljs-number">50</span>) <span class="hljs-keyword">UNIQUE</span> <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span>,
  email <span class="hljs-built_in">VARCHAR</span>(<span class="hljs-number">100</span>) <span class="hljs-keyword">UNIQUE</span> <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span>,
  <span class="hljs-keyword">password</span> <span class="hljs-built_in">VARCHAR</span>(<span class="hljs-number">100</span>) <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span>
);
</code></pre>
<h4 id="heading-entity"><strong>Entity</strong></h4>
<pre><code class="lang-java"><span class="hljs-meta">@Entity</span>
<span class="hljs-meta">@Table(name = "users")</span>
<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">User</span> </span>{
    <span class="hljs-meta">@Id</span>
    <span class="hljs-meta">@GeneratedValue(strategy = GenerationType.IDENTITY)</span>
    <span class="hljs-keyword">private</span> Long id;

    <span class="hljs-keyword">private</span> String username;
    <span class="hljs-keyword">private</span> String email;
    <span class="hljs-keyword">private</span> String password;
}
</code></pre>
<h4 id="heading-repository"><strong>Repository</strong></h4>
<pre><code class="lang-java"><span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">interface</span> <span class="hljs-title">UserRepository</span> <span class="hljs-keyword">extends</span> <span class="hljs-title">JpaRepository</span>&lt;<span class="hljs-title">User</span>, <span class="hljs-title">Long</span>&gt; </span>{
    <span class="hljs-function">Optional&lt;User&gt; <span class="hljs-title">findByUsername</span><span class="hljs-params">(String username)</span></span>;
}
</code></pre>
<h4 id="heading-service"><strong>Service</strong></h4>
<pre><code class="lang-java"><span class="hljs-meta">@Service</span>
<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">UserService</span> </span>{
    <span class="hljs-meta">@Autowired</span>
    <span class="hljs-keyword">private</span> UserRepository repo;

    <span class="hljs-function"><span class="hljs-keyword">public</span> User <span class="hljs-title">register</span><span class="hljs-params">(User user)</span> </span>{
        <span class="hljs-keyword">return</span> repo.save(user);
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> Optional&lt;User&gt; <span class="hljs-title">findByUsername</span><span class="hljs-params">(String username)</span> </span>{
        <span class="hljs-keyword">return</span> repo.findByUsername(username);
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">deleteUser</span><span class="hljs-params">(Long id)</span> </span>{
        repo.deleteById(id);
    }
}
</code></pre>
<h4 id="heading-controller"><strong>Controller</strong></h4>
<pre><code class="lang-java"><span class="hljs-meta">@RestController</span>
<span class="hljs-meta">@RequestMapping("/users")</span>
<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">UserController</span> </span>{
    <span class="hljs-meta">@Autowired</span>
    <span class="hljs-keyword">private</span> UserService service;

    <span class="hljs-meta">@PostMapping</span>
    <span class="hljs-function"><span class="hljs-keyword">public</span> User <span class="hljs-title">register</span><span class="hljs-params">(<span class="hljs-meta">@RequestBody</span> User user)</span> </span>{
        <span class="hljs-keyword">return</span> service.register(user);
    }

    <span class="hljs-meta">@GetMapping("/{username}")</span>
    <span class="hljs-function"><span class="hljs-keyword">public</span> ResponseEntity&lt;User&gt; <span class="hljs-title">getUser</span><span class="hljs-params">(<span class="hljs-meta">@PathVariable</span> String username)</span> </span>{
        <span class="hljs-keyword">return</span> service.findByUsername(username)
                .map(ResponseEntity::ok)
                .orElse(ResponseEntity.notFound().build());
    }

    <span class="hljs-meta">@DeleteMapping("/{id}")</span>
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">deleteUser</span><span class="hljs-params">(<span class="hljs-meta">@PathVariable</span> Long id)</span> </span>{
        service.deleteUser(id);
    }
}
</code></pre>
<p>Spring Boot's implementation follows a clear layered architecture with distinct responsibilities for the Repository (data access), Service (business logic), and Controller (API endpoints). The approach is verbose but very structured, making it easy for new team members to understand the codebase.</p>
<h3 id="heading-2-nestjs-prisma-implementation"><strong>2. NestJS + Prisma Implementation</strong></h3>
<h4 id="heading-prisma-schema"><strong>Prisma Schema</strong></h4>
<pre><code class="lang-plaintext">model User {
  id       Int     @id @default(autoincrement())
  username String  @unique
  email    String  @unique
  password String
}
</code></pre>
<p>Run migration:</p>
<pre><code class="lang-bash">npx prisma migrate dev --name init
</code></pre>
<h4 id="heading-service-1"><strong>Service</strong></h4>
<pre><code class="lang-typescript"><span class="hljs-meta">@Injectable</span>()
<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> UserService {
  <span class="hljs-keyword">constructor</span>(<span class="hljs-params"><span class="hljs-keyword">private</span> prisma: PrismaService</span>) {}

  <span class="hljs-keyword">async</span> register(data: { username: <span class="hljs-built_in">string</span>; email: <span class="hljs-built_in">string</span>; password: <span class="hljs-built_in">string</span> }) {
    <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>.prisma.user.create({ data });
  }

  <span class="hljs-keyword">async</span> findByUsername(username: <span class="hljs-built_in">string</span>) {
    <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>.prisma.user.findUnique({ where: { username } });
  }

  <span class="hljs-keyword">async</span> deleteUser(id: <span class="hljs-built_in">number</span>) {
    <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>.prisma.user.delete({ where: { id } });
  }
}
</code></pre>
<h4 id="heading-controller-1"><strong>Controller</strong></h4>
<pre><code class="lang-typescript"><span class="hljs-meta">@Controller</span>(<span class="hljs-string">'users'</span>)
<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> UserController {
  <span class="hljs-keyword">constructor</span>(<span class="hljs-params"><span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> userService: UserService</span>) {}

  <span class="hljs-meta">@Post</span>()
  <span class="hljs-keyword">async</span> register(<span class="hljs-meta">@Body</span>() data: { username: <span class="hljs-built_in">string</span>; email: <span class="hljs-built_in">string</span>; password: <span class="hljs-built_in">string</span> }) {
    <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>.userService.register(data);
  }

  <span class="hljs-meta">@Get</span>(<span class="hljs-string">':username'</span>)
  <span class="hljs-keyword">async</span> getUser(<span class="hljs-meta">@Param</span>(<span class="hljs-string">'username'</span>) username: <span class="hljs-built_in">string</span>) {
    <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>.userService.findByUsername(username);
  }

  <span class="hljs-meta">@Delete</span>(<span class="hljs-string">':id'</span>)
  <span class="hljs-keyword">async</span> deleteUser(<span class="hljs-meta">@Param</span>(<span class="hljs-string">'id'</span>) id: <span class="hljs-built_in">string</span>) {
    <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>.userService.deleteUser(<span class="hljs-built_in">Number</span>(id));
  }
}
</code></pre>
<p>The NestJS implementation is more concise while still maintaining a clear structure inspired by Angular. The Prisma schema is particularly elegant, defining both the database structure and TypeScript types in a single file. The service methods are clean and to the point, leveraging TypeScript's async/await pattern.</p>
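<p>A side benefit of this constructor-injection style is testability: the service can be exercised against an in-memory stand-in with no database and no Nest runtime at all. The sketch below mirrors the article's <code>UserService</code> but drops the <code>@Injectable</code> decorator so it runs standalone (the <code>InMemoryUsers</code> class is hypothetical, for illustration only):</p>
<pre><code class="lang-typescript">// In-memory stand-in for the injected PrismaService (hypothetical, for this
// sketch only). Constructor injection makes swapping it in trivial.
class InMemoryUsers {
  private rows: { id: number; username: string; email: string; password: string }[] = [];
  private nextId = 1;

  create(data: { username: string; email: string; password: string }) {
    const row = { id: this.nextId, ...data };
    this.nextId = this.nextId + 1;
    this.rows.push(row);
    return Promise.resolve(row);
  }

  findUnique(where: { username: string }) {
    let found: { id: number; username: string; email: string; password: string } | null = null;
    for (const r of this.rows) {
      if (r.username === where.username) { found = r; }
    }
    return Promise.resolve(found);
  }
}

// The service under test, mirroring the article's UserService but runnable
// outside Nest.
class UserService {
  private db: InMemoryUsers;

  constructor(db: InMemoryUsers) {
    this.db = db;
  }

  register(data: { username: string; email: string; password: string }) {
    return this.db.create(data);
  }

  findByUsername(username: string) {
    return this.db.findUnique({ username: username });
  }
}

const service = new UserService(new InMemoryUsers());

service
  .register({ username: "rakesh", email: "r@example.com", password: "secret" })
  .then(function () {
    return service.findByUsername("rakesh");
  })
  .then(function (user) {
    console.log(user ? user.email : "not found"); // prints r@example.com
  });
</code></pre>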
<h3 id="heading-3-net-web-api-ef-core-implementation"><strong>3. .NET Web API + EF Core Implementation</strong></h3>
<h4 id="heading-model"><strong>Model</strong></h4>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">User</span> {
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">int</span> Id { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> Username { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } = <span class="hljs-keyword">string</span>.Empty;
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> Email { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } = <span class="hljs-keyword">string</span>.Empty;
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> Password { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } = <span class="hljs-keyword">string</span>.Empty;
}
</code></pre>
<h4 id="heading-dbcontext"><strong>DbContext</strong></h4>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">AppDbContext</span> : <span class="hljs-title">DbContext</span> {
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">AppDbContext</span>(<span class="hljs-params">DbContextOptions&lt;AppDbContext&gt; options</span>) : <span class="hljs-title">base</span>(<span class="hljs-params">options</span>)</span> { }
    <span class="hljs-keyword">public</span> DbSet&lt;User&gt; Users { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
}
</code></pre>
<h4 id="heading-migration"><strong>Migration</strong></h4>
<pre><code class="lang-bash">dotnet ef migrations add InitialCreate
dotnet ef database update
</code></pre>
<h4 id="heading-controller-2"><strong>Controller</strong></h4>
<pre><code class="lang-csharp">[<span class="hljs-meta">ApiController</span>]
[<span class="hljs-meta">Route(<span class="hljs-meta-string">"api/[controller]"</span>)</span>]
<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">UsersController</span> : <span class="hljs-title">ControllerBase</span> {
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> AppDbContext _context;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">UsersController</span>(<span class="hljs-params">AppDbContext context</span>)</span> {
        _context = context;
    }

    [<span class="hljs-meta">HttpPost</span>]
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;ActionResult&lt;User&gt;&gt; Register(User user) {
        _context.Users.Add(user);
        <span class="hljs-keyword">await</span> _context.SaveChangesAsync();
        <span class="hljs-keyword">return</span> user;
    }

    [<span class="hljs-meta">HttpGet(<span class="hljs-meta-string">"{username}"</span>)</span>]
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;ActionResult&lt;User&gt;&gt; GetUser(<span class="hljs-keyword">string</span> username) {
        <span class="hljs-keyword">var</span> user = <span class="hljs-keyword">await</span> _context.Users.FirstOrDefaultAsync(u =&gt; u.Username == username);
        <span class="hljs-keyword">return</span> user == <span class="hljs-literal">null</span> ? NotFound() : Ok(user);
    }

    [<span class="hljs-meta">HttpDelete(<span class="hljs-meta-string">"{id}"</span>)</span>]
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;IActionResult&gt; <span class="hljs-title">DeleteUser</span>(<span class="hljs-params"><span class="hljs-keyword">int</span> id</span>)</span> {
        <span class="hljs-keyword">var</span> user = <span class="hljs-keyword">await</span> _context.Users.FindAsync(id);
        <span class="hljs-keyword">if</span> (user == <span class="hljs-literal">null</span>) <span class="hljs-keyword">return</span> NotFound();

        _context.Users.Remove(user);
        <span class="hljs-keyword">await</span> _context.SaveChangesAsync();
        <span class="hljs-keyword">return</span> NoContent();
    }
}
</code></pre>
<p>The .NET implementation strikes a balance between verbosity and expressiveness. The C# property syntax is concise, and the controller handles both data access and business logic in this simple example. In a more complex application, you would likely extract a service layer as well.</p>
<h3 id="heading-side-by-side-insights"><strong>Side-by-Side Insights</strong></h3>
<ul>
<li><p><strong>Spring Boot + JPA + Flyway</strong>: More boilerplate, but great for structured, enterprise projects.</p>
</li>
<li><p><strong>NestJS + Prisma</strong>: Concise, type-safe, developer-friendly. Perfect for startups.</p>
</li>
<li><p><strong>.NET Web API + EF Core</strong>: Balanced verbosity and power, shines in Microsoft ecosystems.</p>
</li>
</ul>
<h3 id="heading-my-experience-using-these-tech-stacks"><strong>My Experience Using These Tech Stacks</strong></h3>
<p>Working with these three stacks across different projects has given me valuable hands-on insights into their strengths and use cases.</p>
<p>With Spring Boot, we built a major financial services product using a microservices architecture. While the initial learning curve was steep, the investment paid off in a robust system that scaled well as transaction volumes grew. The explicit nature of the code meant fewer surprises during maintenance phases, and the mature ecosystem provided solutions for most challenges we encountered.</p>
<p>Using NestJS with Prisma, I developed a backend for a startup's mobile app. The development speed was remarkable - we went from concept to functional API in weeks rather than months. The TypeScript integration and Prisma's database tools were particularly impressive, allowing us to iterate quickly as the startup's requirements evolved.</p>
<p>For an internal device farm backend, we chose .NET Web API with Entity Framework Core. The seamless integration with other Microsoft tools in our environment made this choice practical. LINQ queries simplified complex data operations, and the strong C# type system helped prevent many common bugs before they reached production.</p>
<p>What became clear across these projects is that each stack has its sweet spot. The right choice depends on your specific requirements, team expertise, and organizational context.</p>
<hr />
<h2 id="heading-final-thoughts-what-ive-learned-about-stack-selection"><strong>Final Thoughts: What I've Learned About Stack Selection</strong></h2>
<p>Having worked with all three of these stacks across various projects, I've developed some perspective on choosing between them. While I don't claim to be an expert in all these technologies, I'd like to share what I've observed from both my application development and automation work.</p>
<h3 id="heading-my-decision-making-framework"><strong>My Decision-Making Framework</strong></h3>
<p>Here's a simple framework I use when deciding which stack might be most suitable for a project:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>If You're Looking For...</td><td>I've Found This Works Well...</td><td>Based On My Experience...</td></tr>
</thead>
<tbody>
<tr>
<td>Long-term maintainability</td><td>Spring Boot + JPA + Flyway</td><td>Worked well for enterprise projects where we needed stability</td></tr>
<tr>
<td>Fast development</td><td>NestJS + Prisma</td><td>Helped deliver POCs and MVPs quickly when time was tight</td></tr>
<tr>
<td>Microsoft ecosystem integration</td><td>.NET Web API + EF Core</td><td>Made sense when working with systems already using Azure and other MS tools</td></tr>
<tr>
<td>Existing team skills</td><td>Match stack with what people know</td><td>I've seen projects fail when forcing a "better" tech that nobody knows</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-developers-reflections-my-personal-journey-with-these-stacks"><strong>Developer's Reflections: My Personal Journey with These Stacks</strong></h2>
<p>My journey with these technologies has been one of continuous learning and exploration. I want to share some honest reflections from my experiences with each stack.</p>
<h3 id="heading-spring-boot-learning-curve-adventures"><strong>Spring Boot: Learning Curve Adventures</strong></h3>
<p>My introduction to Spring Boot came after working with plain Java for several years. The difference was striking - where I had previously spent days configuring XML files and wrestling with application servers, Spring Boot let me focus on business logic.</p>
<p>Those first weeks involved a steep learning curve though! The complexity of dependency injection and the Spring ecosystem had me searching through documentation and Stack Overflow late into the night. I distinctly remember staring at a cryptic "bean not found" exception for hours before realizing I'd forgotten a simple annotation.</p>
<p>But eventually, things started to click. Building applications became more intuitive, and I grew to appreciate the structure that Spring Boot enforced.</p>
<h3 id="heading-nestjs-prisma-a-pleasant-surprise"><strong>NestJS + Prisma: A Pleasant Surprise</strong></h3>
<p>When I needed to create a dashboard for our test metrics, I decided to try NestJS with Prisma based on a colleague's recommendation. As someone who occasionally worked with Node.js for test scripting, I was curious but skeptical.</p>
<p>Setting up was surprisingly smooth. The TypeScript support was a game-changer for someone like me who values type safety when building test frameworks. The moment that stands out was watching Prisma generate models from our existing test database - it felt like magic compared to the manual ORM mapping I was used to.</p>
<p>The project that would have taken me months with Java was functional in weeks, giving me a new appreciation for what the right tools can do for productivity.</p>
<h3 id="heading-net-breaking-out-of-my-comfort-zone"><strong>.NET: Breaking Out of My Comfort Zone</strong></h3>
<p>I avoided .NET for years, partly due to my background in open source tools. When a client project required it, I reluctantly agreed to build a test framework using .NET Web API and EF Core.</p>
<p>The Visual Studio experience was much better than I expected. LINQ queries made generating test data sets a breeze, and C# felt like a natural evolution from Java. The integration with Azure DevOps for our test pipeline was seamless.</p>
<p>A senior .NET developer on the team showed me patterns for mocking and testing I hadn't seen before, which I've since applied to other projects regardless of language.</p>
<h3 id="heading-what-ive-taken-away"><strong>What I've Taken Away</strong></h3>
<p>My journey with these stacks has taught me that being too dogmatic about technology choices limits growth. Each stack has introduced me to concepts and practices that have made me a better test automation engineer.</p>
<p>For others with a similar background looking to expand their skills:</p>
<ul>
<li><p>Spring Boot taught me architectural discipline</p>
</li>
<li><p>NestJS showed me the value of developer experience</p>
</li>
<li><p>.NET reminded me not to dismiss technologies without trying them</p>
</li>
</ul>
<p>I'm still learning and exploring all three of these stacks. Rather than becoming an expert in one, I've found value in understanding the different approaches each takes to solving similar problems. This perspective has been invaluable when designing test frameworks that need to work across different application architectures.</p>
<p>What about you? If you've worked with these stacks, especially from a testing or automation perspective, I'd love to hear about your experiences in the comments.</p>
<hr />
<h2 id="heading-further-reading-amp-resources"><strong>Further Reading &amp; Resources</strong></h2>
<p>To deepen your understanding of these stacks, here are some excellent resources that have helped me along my journey:</p>
<h3 id="heading-spring-boot-jpa-flyway"><strong>Spring Boot / JPA / Flyway</strong></h3>
<ul>
<li><p><em>Spring Boot Reference Documentation</em> (<a target="_blank" href="https://docs.spring.io/spring-boot/docs/current/reference/html/">Spring.io</a>)</p>
</li>
<li><p><a target="_blank" href="https://codesignal.com/learn/courses/persisting-data-with-spring-data-jpa/lessons/introduction-to-spring-data-jpa"><em>Getting Started with Spring Data JPA</em></a></p>
</li>
<li><p><em>Database Migrations with Flyway</em> (<a target="_blank" href="https://docs.spring.io/spring-boot/docs/current/reference/html/howto.html#howto.data-initialization.migration-tool.flyway">Spring.io</a>)</p>
</li>
<li><p><em>Designing a REST API with Spring Boot</em> (<a target="_blank" href="https://spring.io/guides/tutorials/rest/">Spring.io Guides</a>)</p>
</li>
</ul>
<h3 id="heading-nestjs-prisma"><strong>NestJS / Prisma</strong></h3>
<ul>
<li><p><em>NestJS Official Documentation</em> (<a target="_blank" href="https://docs.nestjs.com/">docs.nestjs.com</a>)</p>
</li>
<li><p><em>Prisma Getting Started Guide</em> (<a target="_blank" href="https://www.prisma.io/docs/getting-started">Prisma.io</a>)</p>
</li>
<li><p><em>Building REST APIs with NestJS</em> (<a target="_blank" href="https://docs.nestjs.com/controllers">NestJS.com</a>)</p>
</li>
<li><p><a target="_blank" href="https://www.prisma.io/blog/nestjs-prisma-rest-api-7D056s1BmOL0"><em>Building a REST API with NestJS and Prisma</em></a></p>
</li>
</ul>
<h3 id="heading-net-web-api-ef-core"><strong>.NET Web API / EF Core</strong></h3>
<ul>
<li><p><em>ASP.NET Core Web API Tutorial</em> (<a target="_blank" href="https://learn.microsoft.com/en-us/aspnet/core/web-api/">Microsoft Learn</a>)</p>
</li>
<li><p><em>Entity Framework Core Documentation</em> (<a target="_blank" href="https://learn.microsoft.com/en-us/ef/core/">Microsoft Learn</a>)</p>
</li>
<li><p><em>Building RESTful Services with .NET</em> (<a target="_blank" href="https://learn.microsoft.com/en-us/aspnet/core/tutorials/first-web-api">Microsoft Learn</a>)</p>
</li>
<li><p><a target="_blank" href="https://medium.com/%40ravipatel.it/a-beginners-guide-to-entity-framework-core-ef-core-5cde48fc7f7a"><em>A Beginner’s Guide to Entity Framework Core (EF Core)</em></a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Transforming Our Automation Suite: From Challenges to Solutions]]></title><description><![CDATA[Introduction:
Test Automation is critical to modern software development, driving efficiency and faster feedback for API and UI testing. For our team, it started as a well-planned effort, but over time, our automation suite grew out of control—especi...]]></description><link>https://blog.rakeshvardan.com/transforming-our-automation-suite-from-challenges-to-solutions</link><guid isPermaLink="true">https://blog.rakeshvardan.com/transforming-our-automation-suite-from-challenges-to-solutions</guid><category><![CDATA[test automation framework]]></category><category><![CDATA[maintainability]]></category><category><![CDATA[best practices]]></category><category><![CDATA[Testing]]></category><category><![CDATA[test pyramids]]></category><dc:creator><![CDATA[Rakesh Vardan]]></dc:creator><pubDate>Wed, 22 Jan 2025 01:45:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/bJhT_8nbUA0/upload/05296e7bdd833358ef8ba35fe113c954.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction:</h2>
<p>Test Automation is critical to modern software development, driving efficiency and faster feedback for API and UI testing. For our team, it started as a well-planned effort, but over time, our automation suite grew out of control—especially the UI tests built on Selenium WebDriver. With increasing delivery pressures, missing governance, and process gaps, we found ourselves dealing with an unmanageable regression suite, plagued by long execution times and flakiness.</p>
<p>This blog will share the lessons we learned, the challenges we faced, and the comprehensive steps we started taking to fix our automation framework. While we are still in the middle of this transformation, we have already seen improvements in the overall health of our regression suite.</p>
<hr />
<h2 id="heading-the-problem-growing-automation-pains">The Problem: Growing Automation Pains</h2>
<p>In the early stages, we practised <strong>in-sprint automation</strong>, and the approach worked seamlessly for some time. However, as our application and feature set expanded, so did our automation suite. We started encountering several significant issues:</p>
<ol>
<li><p><strong>Test Suite Bloat</strong>:</p>
<ul>
<li><p>As the number of tests increased, so did the complexity and size of our suite, leading to longer execution times.</p>
</li>
<li><p>The reliance on Selenium-based UI tests made the suite especially prone to flakiness.</p>
</li>
</ul>
</li>
<li><p><strong>Flaky UI Tests</strong>:</p>
<ul>
<li>Timing issues, dynamic elements, and inconsistent environments resulted in frequent false positives, eroding trust in automation.</li>
</ul>
</li>
<li><p><strong>Lack of Governance</strong>:</p>
<ul>
<li>Without a clear test review process, redundant and low-value tests were added without fully understanding their impact on the suite.</li>
</ul>
</li>
<li><p><strong>Maintenance Overload</strong>:</p>
<ul>
<li>The growing complexity meant we were spending more time on test maintenance and troubleshooting, often at the cost of writing new tests.</li>
</ul>
</li>
<li><p><strong>Environmental Challenges</strong>:</p>
<ul>
<li>Ensuring a stable, consistent environment for test execution became difficult, leading to sporadic failures.</li>
</ul>
</li>
</ol>
<p>Recognizing that our test suite had become more of a burden than a help, we initiated a major overhaul to address these issues.</p>
<hr />
<h2 id="heading-the-turning-point-addressing-root-causes">The Turning Point: Addressing Root Causes</h2>
<p>We took a step back to analyze the situation and identify the root causes of our problems. Here’s what we found:</p>
<ol>
<li><p><strong>Over-reliance on UI Tests</strong>: Many validations were happening at the UI level when they could have been handled more effectively at the API or unit test level.</p>
</li>
<li><p><strong>Redundant Tests Across Layers</strong>: Several tests duplicated functionality already covered by API or unit tests, making them unnecessary.</p>
</li>
<li><p><strong>Inconsistent Test Design</strong>: The lack of modularity in test design meant that even minor changes to the application resulted in large-scale test failures, increasing maintenance costs.</p>
</li>
<li><p><strong>Flaky Tests with No Immediate Fixes</strong>: Flaky tests were often ignored or rerun without addressing the underlying causes, which only compounded the issue over time.</p>
</li>
</ol>
<hr />
<h2 id="heading-the-fix-transforming-our-automation-strategy">The Fix: Transforming Our Automation Strategy</h2>
<p>We knew that to regain control, we had to make some fundamental changes. While we are still in the process of implementing these improvements, here are the key steps we have undertaken so far:</p>
<h3 id="heading-1-migrating-ui-tests-from-selenium-to-selenide">1. <strong>Migrating UI Tests from Selenium to Selenide</strong></h3>
<p>One of our first major initiatives was moving our UI tests from <a target="_blank" href="https://www.selenium.dev/"><strong>Selenium WebDriver</strong></a> to <a target="_blank" href="https://selenide.org/"><strong>Selenide</strong></a>. This shift provided several immediate benefits:</p>
<ul>
<li><p><strong>Increased Stability</strong>: Selenide’s built-in synchronization and smart waiting mechanisms drastically reduced flakiness, making tests more reliable.</p>
</li>
<li><p><strong>Cleaner and Concise Code</strong>: The Selenide framework allowed us to write more concise and readable tests, cutting down on boilerplate code.</p>
</li>
<li><p><strong>Additional Features</strong>: Selenide offers features like automatic screenshots on failure, simpler handling of dynamic elements, and improved support for file downloads and uploads, which enhanced the overall robustness of our test scripts.</p>
</li>
</ul>
<p>Although we’re still migrating tests, we have already seen improvements in the stability and reliability of the suite.</p>
<p><a target="_blank" href="https://selenide.org/documentation/selenide-vs-selenium.html">Selenide vs Selenium</a></p>
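<p>Conceptually, Selenide&rsquo;s built-in waiting is just &ldquo;poll the condition until a deadline&rdquo; applied to every element check. Here is a framework-agnostic sketch of that idea in plain Java &mdash; an illustration of the mechanism only, not Selenide&rsquo;s actual implementation:</p>

```java
import java.util.function.BooleanSupplier;

// Minimal polling wait: re-check a condition until it passes or a deadline hits.
// Selenide applies this pattern automatically to element assertions such as
// shouldBe(visible), which is a big part of why flakiness drops.
class PollingWait {

    static boolean waitUntil(BooleanSupplier condition, long timeoutMs, long pollMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;            // condition met within the timeout
            }
            try {
                Thread.sleep(pollMs);   // back off before re-checking
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;                   // timed out; the caller decides how to fail
    }
}
```

<p>Compared to a hard-coded <code>Thread.sleep</code>, which either wastes time or is too short, polling like this adapts to however long the page actually takes.</p>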
<h3 id="heading-2-creating-a-modular-page-object-library-with-maven">2. <strong>Creating a Modular Page Object Library with Maven</strong></h3>
<p>To address the issue of maintainability, we created a separate <strong>Maven module</strong> dedicated to our <strong>Page Object Model (POM)</strong>. This allowed us to:</p>
<ul>
<li><p><strong>Reuse Across Environments</strong>: By creating a modular design, we can use the same Page Object library for both our <strong>stage</strong> and <strong>production</strong> environments, ensuring consistency.</p>
</li>
<li><p><strong>Simplified Updates</strong>: Any changes to the UI can be updated in the Page Object module without needing to touch the main test suite, making maintenance easier.</p>
</li>
</ul>
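<p>In Maven terms, the split can be as small as a parent aggregator plus a dependency entry. The module and artifact names below are illustrative, not our project&rsquo;s actual coordinates:</p>

```xml
<!-- Parent pom: the page-object library lives in its own module -->
<modules>
  <module>page-objects</module>
  <module>ui-tests</module>
</modules>

<!-- ui-tests/pom.xml: the test suite consumes it like any other dependency -->
<dependency>
  <groupId>com.example.qa</groupId>
  <artifactId>page-objects</artifactId>
  <version>1.0.0</version>
</dependency>
```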
<h3 id="heading-3-test-data-service-using-spring-boot">3. <strong>Test Data Service Using Spring Boot</strong></h3>
<p>One of the challenges we faced was the complexity of managing test data. To simplify this, we built a <strong>Test Data Service using</strong> <a target="_blank" href="https://spring.io/projects/spring-boot"><strong>Spring Boot</strong></a>, which generates the necessary test data through an abstracted business layer:</p>
<ul>
<li><p><strong>Simplified API Interaction</strong>: Instead of making multiple backend API calls directly from the test scripts, our test data service acts as a central hub, orchestrating API calls and returning the required data in a clean, standardized format.</p>
</li>
<li><p><strong>Consistency Across Environments</strong>: This service helps ensure that test data remains consistent across environments, reducing flakiness due to data discrepancies.</p>
</li>
</ul>
<p>Although we’re still fine-tuning this service, the improvement in test data consistency has been noticeable.</p>
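<p>Stripped of the Spring Boot plumbing, the core idea is one business-level method that orchestrates several backend calls and hands tests a single clean payload. The class and method names below are hypothetical, purely to illustrate the shape:</p>

```java
import java.util.Map;

// Stand-in for the real backend APIs the service orchestrates (hypothetical).
interface BackendClient {
    String createUser(String role);
    String createAccount(String userId);
}

// Sketch of the test data service's business layer; in the real project
// this logic sits behind a Spring Boot REST endpoint.
class TestDataService {
    private final BackendClient backend;

    TestDataService(BackendClient backend) {
        this.backend = backend;
    }

    // Tests call this once instead of chaining raw API requests themselves.
    Map<String, String> userWithAccount(String role) {
        String userId = backend.createUser(role);
        String accountId = backend.createAccount(userId);
        return Map.of("userId", userId, "accountId", accountId);
    }
}
```

<p>Because tests only see the business-level method, a change in the underlying API calls is absorbed inside the service instead of rippling through every test.</p>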
<h3 id="heading-4-test-execution-using-selenoid-amp-ggr">4. <strong>Test Execution Using Selenoid &amp; Ggr</strong></h3>
<p>To improve our execution pipeline further, we adopted <a target="_blank" href="https://aerokube.com/selenoid/latest/"><strong>Selenoid</strong></a> and <a target="_blank" href="https://aerokube.com/ggr/latest/"><strong>Ggr</strong></a> from <a target="_blank" href="https://aerokube.com/"><strong>Aerokube</strong></a>. These tools allowed us to run tests in browser <strong>Docker containers</strong>:</p>
<ul>
<li><p><strong>Parallel Execution</strong>: By running tests in parallel across multiple containers, we’ve already begun reducing our total execution time.</p>
</li>
<li><p><strong>Efficient Resource Utilization</strong>: Selenoid provides lightweight, on-demand browser instances, allowing us to run tests on different browsers without maintaining a complex infrastructure.</p>
</li>
<li><p><strong>Scalability with Ggr</strong>: Ggr enables us to scale our test execution across multiple machines, improving overall throughput and test execution efficiency.</p>
</li>
</ul>
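<p>Selenoid reads its browser inventory from a <code>browsers.json</code> file that maps browser names to Docker images. A minimal sketch (the image tag and version are illustrative) looks roughly like this:</p>

```json
{
  "chrome": {
    "default": "latest",
    "versions": {
      "latest": {
        "image": "selenoid/chrome:latest",
        "port": "4444"
      }
    }
  }
}
```

<p>Each test session then gets its own disposable container, which is what makes parallel runs cheap to scale.</p>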
<h3 id="heading-5-optimizing-the-test-pyramid">5. <strong>Optimizing the Test Pyramid</strong></h3>
<p>We realigned our test strategy to follow a <strong>test pyramid</strong> approach:</p>
<ul>
<li><p><strong>API and Unit Tests as Priority</strong>: We moved as many tests as possible from the UI layer to the API and unit test levels. API tests are faster and more reliable, providing quicker feedback.</p>
</li>
<li><p><strong>Reducing UI Tests</strong>: UI tests were reserved only for critical end-to-end workflows, minimizing their impact on test execution time and stability.</p>
</li>
</ul>
<p>This shift is ongoing, but the reduced emphasis on UI testing has already helped reduce execution time.</p>
<h3 id="heading-6-integrating-with-reportportal-for-smarter-failure-analysis">6. <strong>Integrating with ReportPortal for Smarter Failure Analysis</strong></h3>
<p>To streamline our test result analysis, we integrated <a target="_blank" href="https://reportportal.io/"><strong>ReportPortal</strong></a>:</p>
<ul>
<li><p><strong>Enhanced Reporting and Triaging</strong>: ReportPortal’s comprehensive reporting allows us to track test results over time, making it easier to identify and prioritize issues.</p>
</li>
<li><p><strong>AI-Driven Failure Analysis</strong>: With ReportPortal’s AI-powered analysis, we can automatically detect patterns in test failures and categorize them for easier triaging. This significantly speeds up our debugging process and improves the quality of our automation.</p>
</li>
</ul>
<hr />
<h2 id="heading-key-lessons-learned">Key Lessons Learned</h2>
<ol>
<li><p><strong>Modularization Is Essential</strong>: Separating concerns into distinct modules, like the Page Object library and test data service, makes the test framework more maintainable and easier to extend.</p>
</li>
<li><p><strong>Don’t Overload the UI Layer</strong>: Focusing too much on UI tests leads to longer execution times and increased flakiness. A balanced test pyramid with more API and unit tests improves both speed and stability.</p>
</li>
<li><p><strong>Tools Matter</strong>: Migrating to Selenide and leveraging tools like Selenoid and Ggr improved the reliability and efficiency of our test suite by providing better browser management and parallel execution capabilities.</p>
</li>
<li><p><strong>Data Management Is Critical</strong>: The Test Data Service helped streamline data management, making tests less dependent on direct API interactions and ensuring data consistency.</p>
</li>
<li><p><strong>Smarter Failure Analysis</strong>: ReportPortal’s AI-powered failure analysis has started saving us significant time in debugging and improving our ability to fix flaky tests and recurring issues.</p>
</li>
</ol>
<hr />
<h2 id="heading-conclusion">Conclusion:</h2>
<p>Our automation journey is still ongoing, and while we haven’t yet completed all the changes, the improvements we’ve seen so far have been encouraging. Moving to <strong>Selenide</strong>, creating a <strong>Test Data Service</strong>, and using <strong>Selenoid</strong> for browser execution have already contributed to more stable and faster test runs. The integration with <strong>ReportPortal</strong> is helping us catch and fix issues more efficiently, even as we continue to optimize our test suite.</p>
<p>The key takeaway is that while the road to improving automation may be challenging, incremental improvements can lead to significant results over time. We are excited to continue on this journey and look forward to achieving a fully optimized and maintainable automation framework soon!</p>
<p>Happy Testing!</p>
]]></content:encoded></item><item><title><![CDATA[How to Utilize Java 'Records' in Test Automation]]></title><description><![CDATA[Introduction
Writing clean, maintainable test automation code is crucial for ensuring long-term project success. One of the challenges test engineers face is managing the repetitive, boilerplate code needed for data models in test cases. Fortunately,...]]></description><link>https://blog.rakeshvardan.com/how-to-utilize-java-records-in-test-automation</link><guid isPermaLink="true">https://blog.rakeshvardan.com/how-to-utilize-java-records-in-test-automation</guid><category><![CDATA[Boilerplate Code]]></category><category><![CDATA[Test Data Models]]></category><category><![CDATA[java record]]></category><category><![CDATA[test-automation]]></category><category><![CDATA[lombok]]></category><category><![CDATA[automation frameworks]]></category><dc:creator><![CDATA[Rakesh Vardan]]></dc:creator><pubDate>Mon, 14 Oct 2024 14:50:38 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/4f_Kk-AYf64/upload/cb352a0a9fa41aae44d0fbaf9c50198c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Writing clean, maintainable test automation code is crucial for ensuring long-term project success. One of the challenges test engineers face is managing the repetitive, boilerplate code needed for data models in test cases. Fortunately, <strong>Java 14</strong> introduced a <a target="_blank" href="https://openjdk.org/jeps/395">preview feature</a> called '<strong>Records</strong>', which was finalized in <strong>Java 16</strong>. Records provide a succinct and expressive way to define immutable data carriers, significantly reducing the amount of boilerplate code.</p>
<p>In this blog, we’ll explore how <a target="_blank" href="https://docs.oracle.com/en/java/javase/17/language/records.html">Java Records</a> can enhance your test automation, making your code more concise, readable, and maintainable. We will also compare Records with <a target="_blank" href="https://projectlombok.org/">Lombok</a>, a popular library used for reducing boilerplate code in earlier Java versions.</p>
<p><em>For different examples of using Java Records with real-time APIs in test automation, you can check out my GitHub project</em> <a target="_blank" href="https://github.com/rakesh-vardan/java-records-lombok"><em>here</em></a><em>.</em></p>
<h2 id="heading-understanding-java-records">Understanding Java Records</h2>
<p>Java Records are a special kind of class in Java designed to act as immutable data carriers. They can be thought of as nominal tuples, containing immutable fields where the compiler automatically generates methods like <code>equals()</code>, <code>hashCode()</code>, <code>toString()</code>, and accessors. This makes Records particularly suited for use cases where you need to store and retrieve data without much additional logic.</p>
<h3 id="heading-key-features-of-java-records"><strong>Key Features of Java Records:</strong></h3>
<ul>
<li><p><strong>Immutability</strong>: Fields in Records are final, ensuring that the data can't be changed once created.</p>
</li>
<li><p><strong>Generated Methods</strong>: The Java compiler automatically provides implementations for <code>equals()</code>, <code>hashCode()</code>, and <code>toString()</code> methods.</p>
</li>
<li><p><strong>Accessor Methods</strong>: Instead of traditional getters, Records use accessor methods that match the field names.</p>
</li>
</ul>
<h3 id="heading-example-of-java-record"><strong>Example of Java Record</strong></h3>
<p>Here’s a simple example of a <code>User</code> record:</p>
<pre><code class="lang-java"><span class="hljs-function"><span class="hljs-keyword">public</span> record <span class="hljs-title">User</span><span class="hljs-params">(String username, String password)</span> </span>{}
</code></pre>
<p>This one line of code is equivalent to writing a traditional class with private final fields, a constructor, and several utility methods like <code>equals()</code> and <code>hashCode()</code>.</p>
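<p>To see what that one line buys you, here is a tiny runnable demonstration of the methods the compiler generates; the outputs in the comments follow the format the Java language specification defines for records:</p>

```java
// Demonstrating the accessor, equals() and toString() a record generates.
class RecordDemo {

    record User(String username, String password) {}

    public static void main(String[] args) {
        User a = new User("testuser", "secret");
        User b = new User("testuser", "secret");

        System.out.println(a.username());  // testuser  (accessor, no "get" prefix)
        System.out.println(a.equals(b));   // true      (value-based equality)
        System.out.println(a);             // User[username=testuser, password=secret]
    }
}
```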
<h2 id="heading-why-use-java-records-in-test-automation">Why Use Java Records in Test Automation?</h2>
<p>Test automation often involves setting up test data models, such as user credentials, API requests, or database entities. These models tend to be simple data holders without much business logic, making them perfect candidates for Records.</p>
<h3 id="heading-benefits-of-java-records-in-test-automation"><strong>Benefits of Java Records in Test Automation:</strong></h3>
<ul>
<li><p><strong>Reduced Boilerplate</strong>: Simplifies the code by auto-generating common methods, eliminating the need for manual coding of accessors and other methods.</p>
</li>
<li><p><strong>Improved Readability</strong>: Compact syntax helps focus on the logic of the test rather than unnecessary class definitions.</p>
</li>
<li><p><strong>Ease of Maintenance</strong>: Less code means fewer points of failure and easier maintenance over time.</p>
</li>
</ul>
<h3 id="heading-example-simplifying-test-code"><strong>Example: Simplifying Test Code</strong></h3>
<p>Let’s consider a test scenario for verifying login functionality. Here’s how you might implement it without using Records:</p>
<h4 id="heading-without-records">Without Records</h4>
<pre><code class="lang-java"><span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">User</span> </span>{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">final</span> String username;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">final</span> String password;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">User</span><span class="hljs-params">(String username, String password)</span> </span>{
        <span class="hljs-keyword">this</span>.username = username;
        <span class="hljs-keyword">this</span>.password = password;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> String <span class="hljs-title">getUsername</span><span class="hljs-params">()</span> </span>{
        <span class="hljs-keyword">return</span> username;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> String <span class="hljs-title">getPassword</span><span class="hljs-params">()</span> </span>{
        <span class="hljs-keyword">return</span> password;
    }
}
</code></pre>
<p>And use it in a test like this:</p>
<pre><code class="lang-java"><span class="hljs-meta">@Test</span>
<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">testLogin</span><span class="hljs-params">()</span> </span>{
    User user = <span class="hljs-keyword">new</span> User(<span class="hljs-string">"testuser"</span>, <span class="hljs-string">"testpassword"</span>);
    loginPage.login(user.getUsername(), user.getPassword());
    assertTrue(dashboardPage.isLoggedIn());
}
</code></pre>
<h4 id="heading-with-records">With Records</h4>
<p>By using Records, the code becomes much more concise:</p>
<pre><code class="lang-java"><span class="hljs-function"><span class="hljs-keyword">public</span> record <span class="hljs-title">User</span><span class="hljs-params">(String username, String password)</span> </span>{}
</code></pre>
<p>And use it in a test like this:</p>
<pre><code class="lang-java"><span class="hljs-meta">@Test</span>
<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">testLogin</span><span class="hljs-params">()</span> </span>{
    User user = <span class="hljs-keyword">new</span> User(<span class="hljs-string">"testuser"</span>, <span class="hljs-string">"testpassword"</span>);
    loginPage.login(user.username(), user.password());
    assertTrue(dashboardPage.isLoggedIn());
}
</code></pre>
<p>In this case, the test logic remains the same, but the code is simplified, making it easier to read and maintain.</p>
<h2 id="heading-real-world-example-testing-an-e-commerce-application">Real-World Example: Testing an E-commerce Application</h2>
<p>Consider a more complex test scenario in which we’re testing an e-commerce application. Here, we need to represent an <code>Order</code> containing an order ID, user details, and a list of items.</p>
<p><strong>Without Records</strong></p>
<p>Without using Records (or Lombok), the <code>Order</code> class might look like this:</p>
<pre><code class="lang-java"><span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Order</span> </span>{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">final</span> String id;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">final</span> User user;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">final</span> List&lt;Item&gt; items;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">Order</span><span class="hljs-params">(String id, User user, List&lt;Item&gt; items)</span> </span>{
        <span class="hljs-keyword">this</span>.id = id;
        <span class="hljs-keyword">this</span>.user = user;
        <span class="hljs-keyword">this</span>.items = items;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> String <span class="hljs-title">getId</span><span class="hljs-params">()</span> </span>{
        <span class="hljs-keyword">return</span> id;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> User <span class="hljs-title">getUser</span><span class="hljs-params">()</span> </span>{
        <span class="hljs-keyword">return</span> user;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> List&lt;Item&gt; <span class="hljs-title">getItems</span><span class="hljs-params">()</span> </span>{
        <span class="hljs-keyword">return</span> items;
    }
}
</code></pre>
<p><strong>With Records</strong></p>
<p>Using Records simplifies this model dramatically:</p>
<pre><code class="lang-java"><span class="hljs-function"><span class="hljs-keyword">public</span> record <span class="hljs-title">Order</span><span class="hljs-params">(String id, User user, List&lt;Item&gt; items)</span> </span>{}
</code></pre>
<p>In a test, you could then represent and work with <code>Order</code> like this:</p>
<pre><code class="lang-java"><span class="hljs-meta">@Test</span>
<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">testOrderProcessing</span><span class="hljs-params">()</span> </span>{
    User user = <span class="hljs-keyword">new</span> User(<span class="hljs-string">"testuser"</span>, <span class="hljs-string">"testpassword"</span>);
    List&lt;Item&gt; items = Arrays.asList(<span class="hljs-keyword">new</span> Item(<span class="hljs-string">"item1"</span>, <span class="hljs-number">2</span>), <span class="hljs-keyword">new</span> Item(<span class="hljs-string">"item2"</span>, <span class="hljs-number">1</span>));
    Order order = <span class="hljs-keyword">new</span> Order(<span class="hljs-string">"order1"</span>, user, items);

    orderPage.placeOrder(order);
    assertTrue(orderPage.isOrderPlaced(order.id()));
}
</code></pre>
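<p>As a side note, a record need not be a bare data carrier: a compact canonical constructor can reject invalid test data at construction time, so a badly built fixture fails fast instead of surfacing mid-test. The following is an illustrative sketch — the validation rules here are hypothetical, not part of the original example:</p>

```java
import java.util.List;

record User(String username, String password) {}
record Item(String name, int quantity) {}

// Compact constructor: runs before the record's fields are assigned,
// so invalid test data fails at construction time.
record Order(String id, User user, List<Item> items) {
    Order {
        if (id == null || id.isBlank()) {
            throw new IllegalArgumentException("order id must not be blank");
        }
        items = List.copyOf(items); // defensive, immutable copy of the list
    }
}
```

<p>Whether this is worth it for test data depends on the team; plain records with no validation, as in the example above, are often enough.</p>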
<h2 id="heading-comparing-java-records-with-lombok">Comparing Java Records with Lombok</h2>
<p>Before Java <a target="_blank" href="https://docs.oracle.com/en/java/javase/17/language/records.html">Records</a> were introduced, developers often turned to <a target="_blank" href="https://projectlombok.org/">Lombok</a> to reduce boilerplate code. Lombok provides annotations like <code>@Data</code>, <code>@Getter</code>, and <code>@Setter</code>, which automatically generate common methods at compile time. However, Records, being a built-in language feature, come with their own advantages and trade-offs.</p>
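<p>To make the comparison concrete, here is the same <code>User</code> model in both styles. The Lombok version is shown in comments because it needs the external dependency on the classpath; this is an illustrative sketch, not code from either project:</p>

```java
// Lombok version (requires the lombok dependency on the classpath):
//
//   import lombok.Value;
//
//   @Value                    // all fields become private final; generates
//   public class User {       // constructor, getters, equals/hashCode/toString
//       String username;
//       String password;
//   }

// Record version: equivalent guarantees, built into the language (Java 16+).
record User(String username, String password) {}
```

<p>The generated behaviour is close in both cases; the difference is that Lombok produces it through annotation processing at compile time, while records are part of the language itself.</p>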
<h3 id="heading-key-differences">Key Differences:</h3>
<ul>
<li><p><strong>Java Version</strong>:</p>
<ul>
<li><p><strong>Records</strong>: Available from Java 16 onward.</p>
</li>
<li><p><strong>Lombok</strong>: Compatible with older Java versions.</p>
</li>
</ul>
</li>
<li><p><strong>Immutability</strong>:</p>
<ul>
<li><p><strong>Records</strong>: Fields are inherently final, ensuring immutability.</p>
</li>
<li><p><strong>Lombok</strong>: Can generate mutable objects unless explicitly marked with <code>@Value</code> for immutability.</p>
</li>
</ul>
</li>
<li><p><strong>Dependencies</strong>:</p>
<ul>
<li><p><strong>Records</strong>: Native to Java—no external dependencies.</p>
</li>
<li><p><strong>Lombok</strong>: Requires adding an external library to your project.</p>
</li>
</ul>
</li>
<li><p><strong>Tooling Support</strong>:</p>
<ul>
<li><p><strong>Records</strong>: Supported by most modern IDEs and tools since they are part of the core Java language.</p>
</li>
<li><p><strong>Lombok</strong>: Some IDEs may have compatibility issues due to Lombok's bytecode manipulation.</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-when-to-use-records-vs-lombok">When to Use Records vs. Lombok:</h3>
<ul>
<li><p><strong>Use Records if</strong>:</p>
<ul>
<li><p>You're using Java 16 or later.</p>
</li>
<li><p>You need simple, immutable data carriers with minimal custom behaviour.</p>
</li>
</ul>
</li>
<li><p><strong>Use Lombok if</strong>:</p>
<ul>
<li><p>You’re working with older Java versions.</p>
</li>
<li><p>You need mutable objects or more flexibility in the generated code.</p>
</li>
</ul>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Java Records are a powerful tool for test automation, especially when working with simple, immutable data models. They reduce boilerplate code, enhance readability, and make tests easier to maintain. While Lombok remains a great option for those using earlier Java versions or needing additional flexibility, Records provide a cleaner, more efficient approach for modern Java applications.</p>
<p>When writing test automation in Java 16 or later, consider using Records to simplify your data model representations and improve the overall maintainability of your codebase.</p>
]]></content:encoded></item><item><title><![CDATA[How I Improved Productivity with 'Regain' by Reducing Screen Time]]></title><description><![CDATA[In today’s fast-paced, digital-first world, it’s all too easy to lose control of your time. As a software engineer, my day revolves around optimizing processes, but when it came to my personal time management, I found myself slipping. Hours were lost...]]></description><link>https://blog.rakeshvardan.com/how-i-improved-productivity-with-regain-by-reducing-screen-time</link><guid isPermaLink="true">https://blog.rakeshvardan.com/how-i-improved-productivity-with-regain-by-reducing-screen-time</guid><category><![CDATA[Productivity]]></category><category><![CDATA[Time management]]></category><category><![CDATA[Screen Time]]></category><category><![CDATA[mindfulness]]></category><category><![CDATA[Digital well-being]]></category><category><![CDATA[Self Improvement ]]></category><dc:creator><![CDATA[Rakesh Vardan]]></dc:creator><pubDate>Sun, 13 Oct 2024 07:06:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/FbVYVJcrJTA/upload/54e22b1587b4272c8725899391c961a9.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In today’s fast-paced, digital-first world, it’s all too easy to lose control of your time. As a software engineer, my day revolves around optimizing processes, but when it came to my personal time management, I found myself slipping. Hours were lost on Instagram reels, YouTube shorts, and endless LinkedIn scrolling. These moments of distraction had a profound effect on my productivity and mental well-being. That’s when I discovered <a target="_blank" href="https://regainapp.ai/"><em>Regain</em></a>.</p>
<h3 id="heading-the-turning-point">The Turning Point</h3>
<p>Recognizing that my productivity was taking a hit due to excessive screen time, I knew I needed to regain control. I decided to try <em>Regain</em>, a mobile app designed to manage and monitor mobile usage. The change was immediate—and it went beyond just limiting screen time.</p>
<h3 id="heading-how-regain-works">How "Regain" Works</h3>
<p>The app is built around empowering users to regain control over their digital habits. It offers features like daily time limits for specific apps, scheduled notifications, and usage tracking. But what sets it apart is how it addresses the psychological triggers that keep us hooked to our phones. Here’s how it worked for me:</p>
<ul>
<li><p><strong>Scheduled Notifications for Mindful Breaks</strong>: <em>Regain</em> allows me to schedule notifications that remind me to take breaks productively. Instead of aimlessly browsing social media during these breaks, I now spend that time on healthier activities—such as quick stretches, meditation, or a walk. This has helped me rewire how I approach relaxation, and it has had a noticeable effect on my mental clarity.</p>
</li>
<li><p><strong>Dopamine and Cortisol Control</strong>: The constant consumption of short, entertaining content—like reels and shorts—triggers dopamine spikes, which lead to the urge for more content, ultimately creating a cycle of addiction. At the same time, the stress of falling behind on work or personal goals releases cortisol, the "stress hormone." <em>Regain</em> has helped break this cycle. By limiting my screen time and reducing over-stimulation, I’ve been able to stabilize both dopamine and cortisol levels. This balance has not only improved my focus but also reduced feelings of anxiety and stress.</p>
</li>
</ul>
<h3 id="heading-my-experience">My Experience</h3>
<p>Before <em>Regain</em>, I wasn’t fully aware of how much time I was losing to mindless browsing. The app's ability to set daily usage limits has transformed my routine. Now, I’m greeted with a reminder when I approach my limits for apps like Instagram and YouTube, encouraging me to shift my focus back to work or more meaningful activities.</p>
<p>In just a few weeks, I've noticed profound changes—not just in my productivity but also in my overall mood. With less time spent on social media, I’ve found myself less agitated and more centred throughout the day.</p>
<h3 id="heading-key-features-that-made-a-difference">Key Features That Made a Difference</h3>
<ul>
<li><p><strong>Custom Time Limits</strong>: Setting daily caps on apps helped me curb unnecessary screen time. I could limit apps like YouTube and LinkedIn without having to cut them off entirely.</p>
</li>
<li><p><strong>Scheduled Notifications</strong>: This feature allowed me to receive timely reminders to pause and engage in more mindful activities, preventing burnout.</p>
</li>
<li><p><strong>Dopamine Regulation</strong>: By reducing quick-hit content consumption, I’ve become less dependent on constant entertainment and more focused on long-term goals.</p>
</li>
<li><p><strong>Cortisol Reduction</strong>: Less time spent in front of a screen helped reduce stress, allowing me to manage my workload without feeling overwhelmed.</p>
</li>
</ul>
<h3 id="heading-final-thoughts">Final Thoughts</h3>
<p>Since I started using <em>Regain</em>, I’ve regained control of both my screen time and mental well-being. Cutting back on social media and video reels has reduced my dopamine dependency, and in turn, my cortisol levels have decreased, leaving me feeling more balanced and less anxious. I’m more productive, focused, and able to handle the demands of my role with greater ease.</p>
<p>If you’re looking for a way to take charge of your time and improve your focus, I highly recommend trying <em>Regain</em>. For me, it has been the key to better time management, increased productivity, and improved mental health.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><em>As of this writing, the </em><strong><em>Regain</em></strong><em> app appears to be available only for Android. iOS users may explore similar apps such as Forest, Stay Focused, or Breathe.</em></div>
</div>]]></content:encoded></item><item><title><![CDATA[Monitoring the Health of Micro-Services: How we do it?]]></title><description><![CDATA[In my recent project, we successfully integrated the API Health Checker Dashboard, developed by Osanda Deshan, a solution we have been using in our project for quite some time. This open-source solution provides real-time monitoring for the availabil...]]></description><link>https://blog.rakeshvardan.com/monitoring-the-health-of-micro-services-how-we-do-it</link><guid isPermaLink="true">https://blog.rakeshvardan.com/monitoring-the-health-of-micro-services-how-we-do-it</guid><category><![CDATA[Microservices monitoring]]></category><category><![CDATA[API health check]]></category><category><![CDATA[Open-source monitoring tools]]></category><category><![CDATA[NodeJS Monitoring]]></category><category><![CDATA[Backend Services]]></category><category><![CDATA[real-time monitoring]]></category><dc:creator><![CDATA[Rakesh Vardan]]></dc:creator><pubDate>Sun, 13 Oct 2024 06:09:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/4dKy7d3lkKM/upload/b1a4cb997ad72b3827a9134b2b169c7a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In my recent project, we integrated the <a target="_blank" href="https://github.com/osandadeshan/api-health-checker-dashboard"><strong>API Health Checker Dashboard</strong></a>, developed by <a target="_blank" href="https://github.com/osandadeshan"><strong>Osanda Deshan</strong></a>, a solution we have been using for quite some time. This open-source tool provides real-time monitoring of the availability of backend and frontend services, and it has significantly streamlined our ability to track and maintain the health of our APIs. I’d like to share my experience using it, along with a technical overview and a comparison with other solutions.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728797589481/e076792f-0fbc-430b-b31f-01f96c5b4adf.gif" alt class="image--center mx-auto" /></p>
<h3 id="heading-the-need-for-an-on-demand-api-health-check-page">The Need for an On-Demand API Health Check Page</h3>
<p>In our project, managing a growing number of backend services across multiple environments (development, multiple staging environments, UAT) became a significant challenge. We needed a way to <strong>quickly check the health of services</strong> without relying on constant, real-time monitoring. Instead, we were looking for a <strong>simple, on-demand health check</strong> solution that allowed our team to manually verify the status of services at key moments, such as before or after deployments, without unnecessary overhead.</p>
<p>This approach would act as a <strong>first quality gate</strong>—a quick check to ensure that all critical services are running as expected before moving on to more rigorous testing or deployments. We didn't need a complex, continuous monitoring tool with alerting, but rather an easy way to <strong>manually trigger health checks</strong> for services when necessary. This is especially useful in scenarios like:</p>
<ul>
<li><p><strong>Pre-deployment checks</strong> to confirm that services are functioning properly before pushing updates.</p>
</li>
<li><p><strong>Post-deployment checks</strong> to verify that new releases haven't broken any dependencies or services.</p>
</li>
<li><p><strong>On-demand troubleshooting</strong> when we see issues with the application, allowing us to quickly check the availability of services.</p>
</li>
</ul>
<p>After exploring various solutions, we found that many enterprise-grade tools such as <strong>Dynatrace</strong> or <strong>Prometheus</strong> were too complex and offered features we didn’t need, like constant uptime monitoring and alerts. What we needed was a <strong>simpler, open-source solution</strong> to display API statuses when requested.</p>
<p>That’s when we came across the <strong>API Health Checker Dashboard</strong>. This tool provided exactly what we were looking for:</p>
<ul>
<li><p><strong>On-demand health checks</strong> that allow us to see the current status of our services whenever we need to, without requiring continuous polling or monitoring.</p>
</li>
<li><p><strong>A centralized view</strong> of all services across environments, making it easy for team members to verify the health of critical services before proceeding with further tasks.</p>
</li>
</ul>
<p>Here’s a detailed breakdown of how this tool works under the hood, its key features, and why it stands out in comparison to other monitoring solutions.</p>
<h3 id="heading-key-features-of-api-health-checker-dashboard">Key Features of API Health Checker Dashboard</h3>
<ol>
<li><p><strong>Real-Time Monitoring</strong>: The dashboard continuously monitors the availability of backend services by sending periodic requests to predefined health-check endpoints. It refreshes every "x" seconds (configurable), allowing near-instant visibility into the status of APIs.</p>
</li>
<li><p><strong>Multi-Environment Support</strong>: You can maintain multiple environments such as production, staging, or development by configuring separate JSON files. This ensures that each environment’s health is tracked independently within the same dashboard.</p>
</li>
<li><p><strong>Mobile-Friendly Interface</strong>: The dashboard's UI is responsive and can be accessed on both mobile and web browsers.</p>
</li>
<li><p><strong>No CORS Issues</strong>: The tool includes a proxy layer to prevent <strong>CORS</strong> (Cross-Origin Resource Sharing) issues, ensuring smooth communication between different services across domains.</p>
</li>
<li><p><strong>Demo Page Available</strong>: A live demo is hosted <a target="_blank" href="https://osandadeshan-api-health-checker-dashboard.glitch.me/">here</a>, where you can interact with a sample dashboard to see the tool in action.</p>
</li>
</ol>
<h3 id="heading-how-it-works-technical-breakdown">How It Works - Technical Breakdown</h3>
<p>The <strong>API Health Checker Dashboard</strong> is built using <strong>Node.js</strong> and primarily operates by making <strong>HTTP requests</strong> to the health-check endpoints of the services you wish to monitor. Here’s a step-by-step breakdown of how it functions internally:</p>
<p><img src="https://user-images.githubusercontent.com/9147189/135319309-3a8eda05-dc29-4df0-be03-5b921b17a822.PNG" alt /></p>
<ol>
<li><p><strong>Polling Mechanism</strong>:</p>
<ul>
<li><p>The system makes use of a polling mechanism, sending periodic requests to each service’s health-check endpoint. The interval for these requests can be set in the configuration file (default is every 30 seconds).</p>
</li>
<li><p>The system expects the endpoints to return an HTTP status code. If the status is in the 200-range, the service is considered “<strong>available</strong>.” Any other status code, or a timeout, marks the service as “<strong>unavailable</strong>.”</p>
</li>
</ul>
</li>
<li><p><strong>Service Configuration</strong>:</p>
<ul>
<li><p>Services to be monitored are defined in JSON configuration files located in the <code>config</code> directory. This is where you list your backend services’ health-check URLs.</p>
</li>
<li><p>For each service, you can provide:</p>
<ul>
<li><p><strong>Name</strong> of the service (for display on the dashboard)</p>
</li>
<li><p><strong>Health-check endpoint</strong> (URL)</p>
</li>
<li><p><strong>Environment, Description</strong> and <strong>other details</strong> as below</p>
<pre><code class="lang-json">  <span class="hljs-comment">//config/qa-config.json</span>
  [
    {
      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"Employee Service"</span>,
      <span class="hljs-attr">"description"</span>: <span class="hljs-string">"This service will be using for all the employee management functions."</span>,
      <span class="hljs-attr">"id"</span>: <span class="hljs-string">"employee"</span>,
      <span class="hljs-attr">"environment"</span>: <span class="hljs-string">"qa"</span>,
      <span class="hljs-attr">"url"</span>: <span class="hljs-string">"http://dummy.restapiexample.com/api/v1/employees"</span>,
      <span class="hljs-attr">"contact"</span>: <span class="hljs-string">"osanda@maxsoft.com"</span>
    },
    {
      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"ToDos Service"</span>,
      <span class="hljs-attr">"description"</span>: <span class="hljs-string">"This service will be using for all the todo management functions."</span>,
      <span class="hljs-attr">"id"</span>: <span class="hljs-string">"todo"</span>,
      <span class="hljs-attr">"environment"</span>: <span class="hljs-string">"qa"</span>,
      <span class="hljs-attr">"url"</span>: <span class="hljs-string">"https://jsonplaceholder.typicode.com/todos/1"</span>,
      <span class="hljs-attr">"contact"</span>: <span class="hljs-string">"osanda@maxsoft.com"</span>
    }
  ]
</code></pre>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Frontend Interface</strong>:</p>
<ul>
<li><p>The user interface is built using standard web technologies—HTML, CSS, and JavaScript—allowing it to be highly customizable. It provides a dashboard view where all services are listed, showing their real-time status.</p>
</li>
<li><p>Each service is displayed as a tile with either a green or red status indicator, representing "available" or "unavailable" services. The status updates are fetched using AJAX calls, allowing for real-time updates without the need to refresh the page.</p>
</li>
</ul>
</li>
<li><p><strong>Handling CORS Issues</strong>:</p>
<ul>
<li>Many API health-check tools face <strong>CORS</strong> issues when making requests across different domains. To overcome this, the <strong>API Health Checker Dashboard</strong> includes a <strong>proxy layer</strong>. The proxy acts as an intermediary between the dashboard and the health-check services: the browser talks only to the dashboard’s own origin, so cross-origin restrictions never come into play.</li>
</ul>
</li>
<li><p><strong>Real-Time Data Flow</strong>:</p>
<ul>
<li><p>The <strong>Node.js</strong> backend sends requests to the specified endpoints and handles the incoming HTTP responses. The results are then passed to the front end, which updates the status tiles accordingly.</p>
</li>
<li><p>The update frequency (i.e., the polling interval) can be adjusted based on your preference by modifying the configuration settings.</p>
</li>
</ul>
</li>
<li><p><strong>Deployment</strong>:</p>
<ul>
<li><p>The tool is designed to be easy to deploy. After cloning the repository, you can install the necessary dependencies using <code>npm install</code>, then run the dashboard with <code>npm run dev</code>. The dashboard will be accessible at <a target="_blank" href="http://localhost:5000"><code>http://localhost:5000</code></a> by default.</p>
</li>
<li><p>You can also deploy this solution to a cloud platform or any server capable of running <strong>Node.js</strong> applications.</p>
</li>
<li><p>Our apps are deployed to the Pivotal Cloud Foundry environment, so we use <code>cf push</code> to deploy the health-check app to the respective environment.</p>
</li>
</ul>
</li>
</ol>
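<p>The polling and status-classification steps described above can be sketched in a few lines of Node.js. This is an illustrative sketch, not the dashboard’s actual source: the <code>service</code> shape mirrors an entry of the JSON config, and the HTTP client (<code>httpGet</code>) is injected so the example stays self-contained.</p>

```javascript
// Illustrative sketch of the dashboard's polling idea (not its actual
// source). A service counts as "available" when its health-check
// endpoint answers with a 2xx status; any other status, a network
// error, or a timeout marks it "unavailable".

function classify(statusCode) {
  return statusCode >= 200 && statusCode < 300 ? 'available' : 'unavailable';
}

// `service` mirrors one entry of the JSON config ({ name, url, ... });
// `httpGet` is injected so the sketch stays self-contained and testable.
async function checkService(service, httpGet) {
  try {
    const res = await httpGet(service.url);
    return { name: service.name, status: classify(res.status) };
  } catch (err) {
    // network failure or timeout
    return { name: service.name, status: 'unavailable' };
  }
}

// Check all services immediately, then again every `intervalMs`
// milliseconds (the dashboard's configurable refresh interval).
function startPolling(services, httpGet, intervalMs, onResults) {
  const poll = async () => {
    const results = await Promise.all(
      services.map((s) => checkService(s, httpGet))
    );
    onResults(results);
  };
  poll();
  return setInterval(poll, intervalMs);
}
```

<p>In the real dashboard, the injected client is Node’s HTTP layer routed through the proxy, and the results callback updates the status tiles on the page.</p>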
<h3 id="heading-comparing-with-other-similar-solutions">Comparing with Other Similar Solutions</h3>
<ol>
<li><p><strong>Dynatrace</strong></p>
<ul>
<li><p><strong>What it offers</strong>: <a target="_blank" href="https://www.dynatrace.com/solutions/application-monitoring/">Dynatrace</a> is a powerful all-in-one monitoring tool that provides deep observability across applications, infrastructure, and cloud environments. It uses AI to detect anomalies and offers advanced alerting, real-time monitoring, and root-cause analysis.</p>
</li>
<li><p><strong>Why it wasn't the right fit for staging</strong>: While Dynatrace excels in production with its full-scale monitoring and alerting capabilities, it can be overkill for non-production environments. Its complexity and cost don’t justify the overhead for <strong>staging environments</strong>, where we primarily need on-demand health checks rather than continuous monitoring.</p>
</li>
</ul>
</li>
<li><p><strong>Prometheus</strong></p>
<ul>
<li><p><strong>What it offers</strong>: <a target="_blank" href="https://prometheus.io/">Prometheus</a> is an open-source monitoring system designed for capturing metrics, which includes querying and alerting features. It’s highly flexible and customizable, especially when paired with <strong>Grafana</strong> for visualization.</p>
</li>
<li><p><strong>Why we didn’t choose it</strong>: Prometheus requires significant setup and maintenance, and it’s tailored more for capturing time-series data and long-term metrics, which adds unnecessary complexity. We wanted a <strong>lightweight solution</strong> for quickly verifying API health, and setting up Prometheus with custom dashboards and metrics would be overkill for our needs.</p>
</li>
</ul>
</li>
<li><p><strong>Pingdom</strong></p>
<ul>
<li><p><strong>What it offers</strong>: <a target="_blank" href="https://www.pingdom.com/">Pingdom</a> is well-known for uptime monitoring, providing real-time alerts when services go down. It’s widely used for website and API uptime checks and offers easy-to-read dashboards.</p>
</li>
<li><p><strong>Why it wasn’t ideal</strong>: While Pingdom is great for production environments, it comes with subscription costs that aren’t justified for <strong>non-production environments</strong>. Furthermore, its primary focus on uptime and real-time alerts isn't necessary for the <strong>on-demand</strong> checks we need in testing environments.</p>
</li>
</ul>
</li>
<li><p><strong>Atlassian Status Page</strong></p>
<ul>
<li><p><strong>What it offers</strong>: Atlassian’s <a target="_blank" href="https://status.atlassian.com/">Status Page</a> is a communication tool that allows companies to inform users about the operational status of their systems. It’s more focused on <strong>status reporting</strong> and sharing that information with stakeholders, rather than on real-time technical monitoring.</p>
</li>
<li><p><strong>Why we didn’t choose it</strong>: This is mainly a tool for communicating service status to external stakeholders rather than an internal tool for <strong>health-check verification</strong>. We needed a tool focused on <strong>internal, quick health checks</strong> that developers &amp; testers could use during staging-environment testing.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-final-thoughts">Final Thoughts</h3>
<p>Using the <strong>API Health Checker Dashboard</strong> has allowed us to create a unified and reliable view of our API services' health without the need for complex configurations or expensive software. It’s a great fit for teams that require lightweight, real-time API monitoring without the overhead of larger, more complex tools. For those looking for an open-source alternative, I highly recommend trying this solution.</p>
<p>For more technical details and to get started, you can explore the project on <a target="_blank" href="https://github.com/osandadeshan/api-health-checker-dashboard">GitHub</a></p>
]]></content:encoded></item><item><title><![CDATA[Navigating the Storm: Best Practices for Test Leads When a Bug is Found in Production]]></title><description><![CDATA[Introduction:
Discovering a bug in a production environment is a nightmare scenario for any test lead. It’s stressful, potentially embarrassing, and often leads to immediate scrutiny of the testing team. However, it's crucial to recognize that softwa...]]></description><link>https://blog.rakeshvardan.com/navigating-the-storm-best-practices-for-test-leads-when-a-bug-is-found-in-production</link><guid isPermaLink="true">https://blog.rakeshvardan.com/navigating-the-storm-best-practices-for-test-leads-when-a-bug-is-found-in-production</guid><category><![CDATA[Bug Management]]></category><category><![CDATA[Test Lead Strategies]]></category><category><![CDATA[root cause analysis]]></category><category><![CDATA[Quality Assurance]]></category><category><![CDATA[#Continuous improvement]]></category><category><![CDATA[Crisis Management]]></category><category><![CDATA[Software Testing]]></category><dc:creator><![CDATA[Rakesh Vardan]]></dc:creator><pubDate>Tue, 08 Oct 2024 05:40:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/_CFv3bntQlQ/upload/a17158dd2f709f64d1bca699759418e1.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction:</h2>
<p>Discovering a bug in a production environment is a nightmare scenario for any test lead. It’s stressful, potentially embarrassing, and often leads to immediate scrutiny of the testing team. However, it's crucial to recognize that software testing is inherently complex, and even the most thorough procedures cannot catch every bug. This article provides best practices for test leads to manage such situations professionally and turn them into learning opportunities for future improvements.</p>
<h2 id="heading-1-dont-panic-stay-calm-and-focused">1. Don’t Panic: Stay Calm and Focused</h2>
<p>The first and most important step when a bug is found in production is to remain calm. Panicking will not solve the problem and could cloud your judgment, leading to hasty decisions. Stay composed, assess the situation, and remember that it’s an opportunity to show how resilient and resourceful your team can be in crisis management.</p>
<ul>
<li><strong>Pro Tip</strong>: Take a moment to assess your emotional state and ensure your communication is clear and collected. Leaders set the tone for the team.</li>
</ul>
<h2 id="heading-2-gather-all-necessary-information">2. Gather All Necessary Information</h2>
<p>Before rushing to conclusions or taking any immediate action, gather detailed information about the bug. This includes identifying:</p>
<ul>
<li><p>The nature of the bug (e.g., what is causing it?)</p>
</li>
<li><p>The system components affected</p>
</li>
<li><p>The severity and impact on users (how critical is it?)</p>
</li>
</ul>
<p>Understanding the bug’s scope allows for proper prioritization and ensures that the fix addresses the core issue without introducing new ones.</p>
<ul>
<li><strong>Actionable Tip</strong>: Create a standardized checklist for bug discovery in production that your team can quickly fill out in these scenarios.</li>
</ul>
<h2 id="heading-3-communicate-effectively">3. Communicate Effectively</h2>
<p>Communication during such incidents is critical. Inform all relevant stakeholders—developers, product owners, support teams, and clients if necessary—about the issue as soon as it is identified. Be transparent and provide frequent updates on the investigation and resolution progress.</p>
<ul>
<li><p><strong>Effective Communication Plan</strong>:</p>
<ul>
<li><p>Send an initial notification about the bug, including its severity and potential impact.</p>
</li>
<li><p>Assign a point of contact for all updates and progress tracking.</p>
</li>
<li><p>Schedule regular updates, even if no significant progress is made.</p>
</li>
</ul>
</li>
</ul>
<p>Maintaining transparency prevents speculation and ensures everyone is on the same page.</p>
<h2 id="heading-4-conduct-a-root-cause-analysis-rca">4. Conduct a Root Cause Analysis (RCA)</h2>
<p>Once the immediate crisis has been mitigated (e.g., the bug is patched or a workaround is in place), conduct a thorough Root Cause Analysis (RCA) to determine why the bug was missed during testing. Several methodologies can help pinpoint the cause:</p>
<ul>
<li><p><strong>The 5 Whys</strong>: Ask “why” repeatedly until you reach the fundamental reason behind the bug’s occurrence.</p>
<ul>
<li><p><a target="_blank" href="https://www.lean.org/lexicon-terms/5-whys/">https://www.lean.org/lexicon-terms/5-whys/</a></p>
</li>
<li><p><a target="_blank" href="https://www.mindtools.com/a3mi00v/5-whys">https://www.mindtools.com/a3mi00v/5-whys</a></p>
</li>
</ul>
</li>
<li><p><strong>Fishbone Diagram</strong>: Visualize the different factors contributing to the issue, such as testing gaps, communication failures, or overlooked requirements.</p>
<ul>
<li><a target="_blank" href="https://asq.org/quality-resources/fishbone">https://asq.org/quality-resources/fishbone</a></li>
</ul>
</li>
<li><p><strong>Fault Tree Analysis</strong>: Break down the problem into different potential causes, including human error or technical failures.</p>
<ul>
<li><a target="_blank" href="https://fiixsoftware.com/glossary/fault-tree-analysis/">https://fiixsoftware.com/glossary/fault-tree-analysis/</a></li>
</ul>
</li>
</ul>
<p>    <strong>Example RCA</strong>: Let’s say an application crashes when users access a specific feature. Using the 5 Whys technique might reveal that:</p>
<ul>
<li><p>The crash was caused by a feature overload.</p>
</li>
<li><p>The feature wasn’t optimized for large data volumes.</p>
</li>
<li><p>Testing didn’t include large data volumes.</p>
</li>
<li><p>The testing team wasn’t informed about this requirement due to a communication gap.</p>
</li>
</ul>
<p>    <strong>Root Cause</strong>: Requirements were not clearly communicated.</p>
<h2 id="heading-5-implement-corrective-actions">5. Implement Corrective Actions</h2>
<p>Based on the RCA, take corrective actions to avoid future production issues. This could involve:</p>
<ul>
<li><p>Updating test cases to ensure better coverage.</p>
</li>
<li><p>Improving communication between product and testing teams.</p>
</li>
<li><p>Enhancing the bug-tracking process to include new scenarios or edge cases.</p>
</li>
<li><p>Providing additional training to team members on identifying high-risk areas.</p>
</li>
</ul>
<p>By doing so, you’ll turn this production bug into a valuable learning opportunity that strengthens your process moving forward.</p>
<h2 id="heading-6-learn-from-the-experience">6. Learn from the Experience</h2>
<p>Every bug is a chance to improve. Treat it as an opportunity to analyze and refine your team’s processes. Document lessons learned from both the technical and managerial perspectives to prevent future issues. Additionally, encourage knowledge sharing within the team, so everyone understands what went wrong and how it can be prevented in the future.</p>
<ul>
<li><strong>Suggested Approach</strong>: Conduct a postmortem meeting with all stakeholders and publish a summary report that outlines the lessons learned, corrective measures, and updated procedures.</li>
</ul>
<h2 id="heading-7-foster-a-culture-of-shared-responsibility">7. Foster a Culture of Shared Responsibility</h2>
<p>Finally, it's important to cultivate a culture where quality is a shared responsibility. Everyone involved in the software development lifecycle—from developers to product managers—has a role in ensuring that the product is bug-free. Foster a blameless culture where the focus is on collaboration, not finger-pointing.</p>
<ul>
<li><strong>Blameless Retrospectives</strong>: Encourage teams to discuss production issues openly without fear of blame, focusing instead on how processes can be improved as a collective.</li>
</ul>
<h2 id="heading-bonus-prevention-strategies-and-tools">Bonus: Prevention Strategies and Tools</h2>
<p>While this article focuses on how to handle bugs once they’re found in production, prevention is always better than cure. Here are some strategies to minimize the chances of bugs making it to production in the first place:</p>
<ul>
<li><p><strong>Automated Testing</strong>: Integrating automated unit and regression testing into your continuous integration pipeline helps catch bugs earlier.</p>
</li>
<li><p><strong>Load Testing</strong>: Ensure that features are tested under the conditions they’ll face in production, including large data volumes or high traffic.</p>
</li>
<li><p><strong>CI/CD Pipelines</strong>: Continuous Integration/Continuous Deployment tools help ensure that code is thoroughly tested before reaching production, reducing the likelihood of bugs.</p>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Finding a bug in production is not an ideal scenario, but it doesn’t have to be a disaster. By staying calm, gathering information, communicating clearly, and learning from the experience, test leads can turn this situation into an opportunity for growth and improvement. Remember, a great team is not defined by its ability to prevent all bugs but by how effectively it handles them when they occur.</p>
<p>By fostering a culture of shared responsibility, encouraging open communication, and continually refining your processes, you can ensure that future production bugs are minimized and managed with confidence.</p>
]]></content:encoded></item><item><title><![CDATA[Generative AI Kata: How We Won the Challenge?]]></title><description><![CDATA[Recently, some team members and I participated in a Generative AI Kata held across EPAM India. We won the challenge against other teams and received first prize. In this article, I will share details about our experience, the challenge itself, and ho...]]></description><link>https://blog.rakeshvardan.com/generative-ai-kata-how-we-won-the-challenge</link><guid isPermaLink="true">https://blog.rakeshvardan.com/generative-ai-kata-how-we-won-the-challenge</guid><category><![CDATA[generative ai]]></category><category><![CDATA[katas]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[modernization]]></category><category><![CDATA[cloud native]]></category><category><![CDATA[Express]]></category><category><![CDATA[React]]></category><dc:creator><![CDATA[Rakesh Vardan]]></dc:creator><pubDate>Fri, 24 May 2024 03:54:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1716523555410/681fdc3e-b0be-4a2b-bc0c-21edf761f8e6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Recently, some team members and I participated in a Generative AI Kata held across EPAM India. We won the challenge against other teams and received first prize. In this article, I will share details about our experience, the challenge itself, and how we solved it.</p>
<h3 id="heading-what-is-generative-ai-kata"><strong>What is Generative AI Kata</strong></h3>
<p>"<a target="_blank" href="https://en.wikipedia.org/wiki/Kata">Kata</a>" is a Japanese word often used in martial arts to describe practicing different patterns, either alone or in small groups, to memorize and perfect them.</p>
<p><em>Generative AI Kata is an immersive journey into the realm of artificial intelligence, where you'll delve into the art of creating, training, and fine-tuning AI models to generate novel and creative outputs. It's not just about coding; it's about unleashing the full potential of AI to drive innovation and solve real-world challenges.</em></p>
<h3 id="heading-why-should-anyone-attend-kata">Why should anyone attend a Kata</h3>
<p><strong>Deepen Your AI Expertise:</strong> Whether you're an experienced AI practitioner or just starting out, a Generative AI Kata offers a unique opportunity to deepen your understanding and master the latest techniques in AI development.</p>
<p><strong>Hands-on Exploration:</strong> It's an interactive experience like no other: you dive into hands-on exercises and embark on an exciting journey of discovery with your peers.</p>
<p><strong>Collaborative Energy:</strong> You join fellow tech enthusiasts, share ideas, and work together on exciting AI projects. A Generative AI Kata is more than just an event; it's about a shared passion for exploring, learning, and growing together.</p>
<p><strong>Stay Ahead of the Curve:</strong> In today's fast-changing tech world, keeping up is essential. A Generative AI Kata gives you the knowledge and skills to stay at the cutting edge of AI innovation.</p>
<h3 id="heading-how-it-works">How it works</h3>
<p><strong>Step 1:</strong> The panel explains the problem statement to all participants. Participants can ask any questions or request clarifications.</p>
<p><strong>Step 2:</strong> Participants are divided into teams of 3-5 members.</p>
<p><strong>Step 3:</strong> Teams go to their discussion rooms and brainstorm possible solutions.</p>
<p><strong>Step 4:</strong> Each team settles on its best solution and presents it to the panel in the open room, staying ready to explain the approach and answer questions during this Challenge phase. A facilitator manages the presentations in round-robin style.</p>
<p><strong>Step 5:</strong> Finally, the jury discusses and announces the top two winning solutions (teams) and gives out rewards.</p>
<h3 id="heading-challenge">Challenge</h3>
<p>Below is the problem statement given to all participants during the Kata, along with the expected solution and artifacts that need to be delivered.</p>
<p><strong>Problem Statement:</strong></p>
<p><em>Our QR code generator app, despite its long-standing presence and large user base, is facing issues due to its outdated legacy monolithic application using Winforms. The lack of documentation and turnover in personnel has further complicated the modernization process of the application. This is hindering our market competitiveness and affecting the app's performance, scalability, and maintainability.</em></p>
<p>Below is the current application, developed in WinForms - a .Net-based desktop application.</p>
<p><img src="https://raw.githubusercontent.com/ArdeshirV/QrCodeGeneratorWithLogo/main/QrCodeGeneratorWithLogo/img/OuP.jpg" alt="QR Code Generator with Logo photo" class="image--center mx-auto" /></p>
<p>Major features:</p>
<ul>
<li><p>Generate a QR code for the given URL</p>
</li>
<li><p>Embed the image as a logo in the generated QR code</p>
</li>
<li><p>Provision to download and save the image</p>
</li>
<li><p>Capability to display the logo background with various shapes and colors</p>
</li>
</ul>
<p><em>Source: </em><a target="_blank" href="https://ardeshirv.github.io/QrCodeGeneratorWithLogo/"><em>https://ardeshirv.github.io/QrCodeGeneratorWithLogo/</em></a></p>
<p><strong>Solution Required:</strong></p>
<p><em>The solution involves a strategic approach of static whitebox reverse engineering. This includes analyzing the current state of the application, defining the desired state, and conducting a gap analysis. The key deliverables from this process will be a comprehensive documentation of the current state of the application and a detailed gap analysis report. This will help us enhance the performance, scalability, and maintainability of the app, thereby improving our competitiveness in the market.</em></p>
<p>Also, a major point to note is that we should leverage GenAI to accomplish these tasks. The submitted solution will be evaluated based on the below criteria.</p>
<ul>
<li><p>Prompt Effectiveness &amp; Techniques Used</p>
</li>
<li><p>Less Human Intervention</p>
</li>
<li><p>Context given to LLM</p>
</li>
<li><p>Output Expectations</p>
</li>
<li><p>Design Patterns used</p>
</li>
<li><p>Tech Stack-specific Output</p>
</li>
<li><p>And other documentation</p>
</li>
</ul>
<h3 id="heading-how-we-solved-it">How we solved it</h3>
<p>As planned, each team was to consist of 5-6 members with roles such as Architect, Lead Developer, Automation Engineer, Product Owner, and Testing Engineer. However, due to unforeseen circumstances, our Architect and Lead Developer didn't join the session. That left just me (Lead Automation Engineer/SDET) and two other colleagues, both Senior Automation Engineers. On top of that, no Product Owner was assigned to our team!</p>
<p>We had only two hours to complete the major task of modernizing the given legacy, monolithic application. Apart from me, the team had limited exposure to application architecture and development, but we accepted the challenge and took on multiple roles to complete it.</p>
<p><strong><em>While the requirement was to use GenAI to generate the solution and documentation, we used EPAM's own</em></strong> <a target="_blank" href="https://epam-rail.com/platform"><strong><em>DIAL</em></strong></a> <strong><em>(a tool similar to ChatGPT with access to multiple LLM models from different vendors) as our tool.</em></strong></p>
<ul>
<li><p><strong>Acting as a Business Analyst:</strong></p>
<ul>
<li><p>We began brainstorming on the given application. We used the desktop application as a user to understand its functionality. From this, we identified the necessary features for our new application.</p>
</li>
<li><p>With the necessary functionalities as input, we used prompt engineering to define the high-level requirements for our new application. We included columns such as <em>Requirement ID, Requirement Description</em>, and <em>Priority</em> in the prompt, and the LLM successfully generated the output as we intended.</p>
</li>
<li><p>We even included features for the performance, security, and usability requirements of the application.</p>
</li>
<li><p>After this, we gathered all the data into a table format and created our final requirement document.</p>
</li>
<li><p>The final document we submitted to the Jury is available here on <a target="_blank" href="https://github.com/rakesh-vardan/Team_C/blob/main/Documentation/Requirements.md">GitHub</a> for reference.</p>
</li>
<li><p>The exported prompts from GenAI are available <a target="_blank" href="https://github.com/rakesh-vardan/Team_C/blob/main/Prompts/BusinessAnalyst.json">here</a>. This includes our entire conversation with the AI acting as a business analyst.</p>
</li>
</ul>
</li>
<li><p><strong>Acting as an Architect &amp; Lead Developer</strong></p>
<ul>
<li><p>This was the most challenging part of the entire session for us.</p>
</li>
<li><p>We provided the LLM with all the technical details about the application, the challenges we faced with the monolith and our goals. We aimed to gather information on a new architecture for the app that would be modular, scalable, and efficient using the latest technologies.</p>
</li>
<li><p>Based on the prompts and AI output, we decided to build or migrate the application into different modules, consisting of front-end and back-end layers.</p>
</li>
<li><p>For the backend, we chose <a target="_blank" href="https://expressjs.com/"><em>ExpressJS</em></a> and began with prompts to obtain the necessary backend logic for the application. We used the existing npm library <a target="_blank" href="https://www.npmjs.com/package/qrcode">qrcode</a> and added a single route for generating the image in the <code>app.js</code>.</p>
</li>
<li><p>With the core back-end functionality working well, we focused on building the front-end application as a separate module. Using the prompts, we chose ReactJS as our solution and obtained the necessary code to build a simple web application.</p>
</li>
<li><p>After we integrated and started both apps, we encountered CORS errors, and the application wasn't working. With the help of AI, we resolved the issue by adding a new npm dependency, <a target="_blank" href="https://www.npmjs.com/package/cors"><code>cors</code></a>, and some new configurations for the backend app.</p>
</li>
<li><p>Voila! Now both the frontend and backend apps are integrated successfully, and the application is running smoothly in a browser on our local systems. Finally, we can pass a URL to the QR code generator app, and it generates the image as expected.</p>
</li>
<li><p>We improved the app's appearance and added new features like image download and disabling the URL textbox until the user enters text. All these enhancements were made using prompt engineering via AI.</p>
</li>
<li><p>We even generated the high-level architecture diagram and the README.md file for the repository using only prompts.</p>
</li>
<li><p>Our application architecture is shown below. We used AI-generated code snippets along with <a target="_blank" href="https://www.plantuml.com/">PlantUML</a> to create this diagram.</p>
<p>  <img src="https://github.com/rakesh-vardan/Team_C/raw/main/images/QR-arch.png" alt="Architecture" class="image--center mx-auto" /></p>
</li>
<li><p>The final GitHub code repository we submitted to the Jury is available <a target="_blank" href="https://github.com/rakesh-vardan/Team_C">here</a> for reference. It includes both <a target="_blank" href="https://github.com/rakesh-vardan/Team_C/tree/main/qr-generator-frontend">front-end</a> and <a target="_blank" href="https://github.com/rakesh-vardan/Team_C/tree/main/qr-generator-backend">back-end</a> code.</p>
</li>
<li><p>The exported prompts from GenAI are available <a target="_blank" href="https://github.com/rakesh-vardan/Team_C/blob/main/Prompts/ApplicationDevelopment.json">here</a>. This includes our entire conversation with the AI acting as an Architect and Developer.</p>
</li>
</ul>
</li>
<li><p><strong>Acting as a Test Lead &amp; Test Engineer</strong></p>
<ul>
<li><p>While developing the application, we also used similar prompting techniques to generate the artifacts for testing.</p>
</li>
<li><p>After setting the application context and going through multiple iterations of prompts, we created a detailed Test Strategy document. You can refer to the final document <a target="_blank" href="https://github.com/rakesh-vardan/Team_C/blob/main/Documentation/TestStrategy.md">here</a>.</p>
</li>
<li><p>Developing all the features within the given time frame wasn't possible. So, we focused on some critical features and created an MVP release for the migration. Based on these prioritized features, we created a test plan to execute the testing activities and validate the application.</p>
</li>
<li><p>Initially, the generated content was generic, but we explicitly changed our prompts to include the necessary information for some sections. As a result, our final <a target="_blank" href="https://github.com/rakesh-vardan/Team_C/blob/main/Documentation/TestPlan.md">Test Plan</a> is ready.</p>
</li>
<li><p>Next, we created the required test <a target="_blank" href="https://github.com/rakesh-vardan/Team_C/blob/main/Documentation/TestCases.md">cases</a> to validate the application and compiled them in a table with all the necessary columns.</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-demo">Demo</h3>
<p>Once we finished developing our application, we manually validated the scenarios. We didn't have time to prepare a deck with our design and artifacts, so we used the README.md file as our presentation to the jury. We explained our approach to modernizing the application, our different roles, and how we achieved the desired results using Generative AI and prompt engineering. The jury had some questions and clarifications. They also asked us to show the documents we created and the prompts in the DIAL. We explained and demonstrated them to the best of our knowledge.</p>
<p>Here is the application that we developed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716128457418/0474b838-d679-4af4-a836-c95f17e28aa5.gif" alt class="image--center mx-auto" /></p>
<h3 id="heading-lessons-learned">Lessons Learned</h3>
<p>Winning the Kata doesn't mean my team and I got everything right or implemented every requirement. Far from it. Given the two-hour limit, we had to make trade-offs and skip certain parts of the implementation.</p>
<ul>
<li><p>We missed generating some important documents, such as a gap analysis of the given application.</p>
</li>
<li><p>We couldn't implement some of the existing functionalities like adding a logo to the image, shapes, and colors for the logo, etc.</p>
</li>
<li><p>We focused only on the core functionality of generating and downloading the QR image, providing the simplest and most effective solution possible.</p>
</li>
<li><p>Because the team members were new to one another, it also took time to get acquainted, plan the activities, distribute the tasks, and understand each other's roles in the session.</p>
</li>
<li><p>Despite these issues, we used the given time to create a minimum viable working product for the jury.</p>
</li>
<li><p>Other teams also worked very hard and gave great presentations. One team used Python for their app and built all the functionality. Another team generated detailed documentation but did not create a working solution. Each team had its own unique experience.</p>
</li>
</ul>
<h3 id="heading-conclusion">Conclusion</h3>
<p>It was great to be part of this Kata and collaborate with others toward a common goal, which helped us expand our capabilities. Despite team constraints and limited time, we successfully created a modular, scalable application, leveraging Generative AI and prompt engineering to produce the code, documentation, architecture diagrams, and test plans. Our focus on core functionality and effective collaboration led to our victory.</p>
]]></content:encoded></item><item><title><![CDATA[Chaos Engineering: A Comparative Review and Analysis of Tools]]></title><description><![CDATA[Introduction:
Chaos Engineering has emerged as a critical discipline in the world of software development, helping teams build more resilient and robust systems. Several tools have been developed to facilitate chaos engineering practices. This articl...]]></description><link>https://blog.rakeshvardan.com/chaos-engineering-a-comparative-review-and-analysis-of-tools</link><guid isPermaLink="true">https://blog.rakeshvardan.com/chaos-engineering-a-comparative-review-and-analysis-of-tools</guid><category><![CDATA[SystemResiliency]]></category><category><![CDATA[Chaos Engineering]]></category><category><![CDATA[fault tolerance]]></category><category><![CDATA[Disaster recovery]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Microservices]]></category><dc:creator><![CDATA[Rakesh Vardan]]></dc:creator><pubDate>Mon, 20 May 2024 10:30:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1716186482929/42227a40-0f67-4346-81f0-3e73ce106be1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction"><strong>Introduction:</strong></h3>
<p>Chaos Engineering has emerged as a critical discipline in the world of software development, helping teams build more resilient and robust systems. Several tools have been developed to facilitate chaos engineering practices. This article provides a comparative analysis of some of the most popular chaos engineering tools available today.</p>
<h3 id="heading-1-chaos-monkey"><strong>1. Chaos Monkey:</strong></h3>
<p><a target="_blank" href="https://netflix.github.io/chaosmonkey/">Chaos Monkey</a> is a resiliency tool developed and used by Netflix. It follows the principles of Chaos Engineering by randomly terminating instances in production to ensure that engineers implement their services to be resilient to instance failures. </p>
<p>Chaos Monkey is designed to work on the Amazon Web Services (AWS) platform. It operates by randomly selecting a target from a specified group and terminating it. This random termination simulates potential real-world issues and allows developers to proactively identify and fix weaknesses in their systems. Chaos Monkey is fully integrated with Spinnaker, the continuous delivery platform that Netflix uses. It should work with any backend that Spinnaker supports (AWS, Google Compute Engine, Azure, Kubernetes, Cloud Foundry). It has been tested with AWS, GCE, and Kubernetes.</p>
<p>The primary advantage of Chaos Monkey is its maturity and wide adoption. Being one of the first tools in the field of Chaos Engineering, it has been thoroughly tested and refined. It's backed by Netflix, which gives it a strong support base.</p>
<p>However, Chaos Monkey's scope is primarily limited to AWS. While it can be a powerful tool for teams operating in an AWS environment, those using other platforms may find its functionality limited. Additionally, compared to some other Chaos Engineering tools, it may not offer as wide a range of chaos experiments.</p>
<h3 id="heading-2-gremlin"><strong>2. Gremlin:</strong></h3>
<p><a target="_blank" href="https://www.gremlin.com/chaos-engineering">Gremlin</a> is a Chaos Engineering platform developed by Gremlin Inc. It provides a fully hosted service that allows engineers to safely, securely, and easily simulate chaos to test how their systems handle failure scenarios.</p>
<p>Gremlin offers a wide range of attack vectors, including resource attacks (like CPU, memory, disk, and IO), state attacks (like shutdown and time travel), and network attacks (like latency, packet loss, and DNS). This makes it a comprehensive tool for a variety of chaos experiments.</p>
<p>One of the key advantages of Gremlin is its user-friendly interface, which makes it easy to design and run chaos experiments. It also provides strong support and detailed documentation, making it accessible for both beginners and experienced practitioners of Chaos Engineering.</p>
<p>However, Gremlin is not open-source, and its pricing can be a barrier for small teams or individual users. It's also a more complex tool, which may be more than some users need for simple chaos experiments.</p>
<h3 id="heading-3-chaos-toolkit"><strong>3. Chaos Toolkit:</strong></h3>
<p><a target="_blank" href="https://chaostoolkit.org/">Chaos Toolkit</a> is an open-source Chaos Engineering tool developed by ChaosIQ (now Reliably). It's designed to be a simple, easy-to-use, and extendable tool for running chaos experiments. </p>
<p>One of the key features of the Chaos Toolkit is its extensibility. It provides a simple core that you can extend with plugins to support a wide range of platforms and technologies. This makes it a versatile tool that can be adapted to many different environments.</p>
<p>Chaos Toolkit also emphasizes simplicity and ease of use. It uses a declarative JSON or YAML format for defining chaos experiments, making it easy to understand and control what your experiments will do.</p>
<p>However, the Chaos Toolkit is less mature compared to some other Chaos Engineering tools, and it has a limited set of built-in attacks. It's also a more low-level tool, which means it may require more setup and configuration compared to a fully hosted service like Gremlin.</p>
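<p>To make the declarative format concrete, here is a sketch of a Chaos Toolkit experiment file. The URL, service name, and tolerance values are invented for illustration; you would run such a file with <code>chaos run experiment.json</code>:</p>

```json
{
  "version": "1.0.0",
  "title": "Service stays available while a dependency restarts",
  "description": "Illustrative experiment; endpoints and names are hypothetical.",
  "steady-state-hypothesis": {
    "title": "Application responds",
    "probes": [
      {
        "type": "probe",
        "name": "app-must-respond",
        "tolerance": 200,
        "provider": {
          "type": "http",
          "url": "http://localhost:8080/health"
        }
      }
    ]
  },
  "method": [
    {
      "type": "action",
      "name": "restart-dependency",
      "provider": {
        "type": "process",
        "path": "systemctl",
        "arguments": "restart my-dependency.service"
      }
    }
  ],
  "rollbacks": []
}
```

<p>The steady-state hypothesis is checked before and after the method runs; if the application no longer meets the stated tolerance, the experiment reports a deviation.</p>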
<h3 id="heading-4-litmus"><strong>4. Litmus:</strong></h3>
<p><a target="_blank" href="https://litmuschaos.io/">Litmus</a> is an open-source Chaos Engineering tool developed by MayaData and the open-source community. It's a Kubernetes-native tool, meaning it's designed specifically to run chaos experiments in Kubernetes environments.</p>
<p>One of the key features of Litmus is its extensive chaos experiment library. It provides a wide range of pre-defined chaos experiments that you can use to test your Kubernetes applications, including pod failures, node failures, network latency, and more.</p>
<p>Litmus also emphasizes observability and SRE principles. It provides detailed metrics and logs for your chaos experiments, making it easy to understand the impact of the chaos and identify any issues.</p>
<p>However, because Litmus is Kubernetes-native, it's limited to Kubernetes environments. If you're not using Kubernetes, you may find its functionality limited.</p>
<h3 id="heading-5-powerfulseal"><strong>5. PowerfulSeal:</strong></h3>
<p><a target="_blank" href="https://powerfulseal.github.io/powerfulseal/">PowerfulSeal</a> is an open-source Chaos Engineering tool developed by Bloomberg. It's designed to inject failure into your Kubernetes clusters, helping you detect problems as early as possible.</p>
<p>One of the key features of PowerfulSeal is its robustness. It provides a wide range of chaos experiments, including killing pods, draining nodes, and introducing network latency. It also supports both automated and interactive modes, giving you flexibility in how you run your chaos experiments.</p>
<p>PowerfulSeal also emphasizes observability (<em>with Prometheus or Datadog</em>). It provides detailed logs and metrics for your chaos experiments, making it easy to understand the impact of the chaos and identify any issues.</p>
<p>However, PowerfulSeal is less user-friendly compared to some other Chaos Engineering tools. It requires more setup and configuration, and its command-line interface may be intimidating for beginners. It's also limited to Kubernetes environments.</p>
<h3 id="heading-6-chaos-mesh"><strong>6. Chaos Mesh:</strong></h3>
<p><a target="_blank" href="https://chaos-mesh.org/">Chaos Mesh</a> is an open-source Chaos Engineering platform developed by PingCAP. It's designed to orchestrate chaos experiments in Kubernetes environments.</p>
<p>One of the key features of Chaos Mesh is its comprehensive range of fault types. It supports a wide variety of chaos experiments, including pod failures, network failures, I/O failures, and even JVM application failures. This makes it a versatile tool for testing the resilience of your Kubernetes applications.</p>
<p>Chaos Mesh also emphasizes ease of use. It provides a user-friendly dashboard for managing and monitoring your chaos experiments, making it accessible for both beginners and experienced practitioners of Chaos Engineering.</p>
<p>However, like Litmus and PowerfulSeal, Chaos Mesh is Kubernetes-native and is therefore limited to Kubernetes environments.</p>
<h3 id="heading-7-pumba"><strong>7. Pumba:</strong></h3>
<p><a target="_blank" href="https://github.com/alexei-led/pumba">Pumba</a> is a chaos testing and network emulation tool for Docker, developed by Alexei Ledenev. It's designed to introduce chaos and network latency to Docker containers to test their resilience and discover faults.</p>
<p>One of the key features of Pumba is its simplicity and lightweight nature. It operates directly on running Docker containers, making it easy to integrate into any Docker-based environment. It supports a variety of chaos experiments, including stopping, killing, and removing Docker containers, as well as introducing network latency, packet loss, and rate control.</p>
<p>However, Pumba is less mature compared to some other Chaos Engineering tools, and its scope is primarily limited to Docker environments. It's also a more low-level tool, which means it may require more setup and configuration compared to a fully hosted service.</p>
<h3 id="heading-8-toxiproxy"><strong>8. ToxiProxy:</strong></h3>
<p><a target="_blank" href="https://github.com/shopify/toxiproxy">ToxiProxy</a> is a framework for testing network conditions, developed by Shopify. It's designed to simulate different network conditions and failures, allowing you to test how your application handles them.</p>
<p>One of the key features of ToxiProxy is its focus on network conditions. It supports a variety of network failures, including latency, timeouts, and packet loss. This makes it a valuable tool for testing the resilience of your application to network issues.</p>
<p>ToxiProxy operates as a TCP proxy, introducing the specified network conditions between your application and any services it communicates with over the network. This makes it a versatile tool that can be used with any application that communicates over TCP.</p>
<p>However, ToxiProxy's scope is primarily limited to network conditions. If you need to test other types of failures, such as resource exhaustion or system failures, you may need to use it in conjunction with other Chaos Engineering tools.</p>
<p>Below are the high-level decision diagram &amp; comparative table for all the discussed tools for implementing Chaos Engineering.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716183049068/38d432df-97d6-40ef-8dbe-5e908c4bf0f4.png" alt class="image--center mx-auto" /></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Tool</strong></td><td><strong>Developed By</strong></td><td><strong>Key Features</strong></td><td><strong>Limitations</strong></td><td><strong>Ideal For</strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>Chaos Monkey</strong></td><td>Netflix</td><td>Mature, wide adoption, integrated with Spinnaker, AWS-focused</td><td>Limited to AWS, less range of chaos experiments</td><td>AWS environments</td></tr>
<tr>
<td><strong>Gremlin</strong></td><td>Gremlin Inc.</td><td>User-friendly interface, a wide range of attack vectors, strong support and detailed documentation</td><td>Not open-source, pricing can be a barrier for small teams</td><td>Both beginners and experienced practitioners</td></tr>
<tr>
<td><strong>Chaos Toolkit</strong></td><td>ChaosIQ (now Reliably)</td><td>Open-source, extendable with plugins, simple core, easy to use</td><td>Less mature, limited set of built-in attacks, requires more setup and configuration</td><td>Users who need a simple, extendable tool</td></tr>
<tr>
<td><strong>Litmus</strong></td><td>MayaData and the open-source community</td><td>Kubernetes-native, extensive chaos experiment library, strong observability</td><td>Limited to Kubernetes environments</td><td>Kubernetes environments</td></tr>
<tr>
<td><strong>PowerfulSeal</strong></td><td>Bloomberg</td><td>Robust, wide range of chaos experiments, supports automated and interactive modes</td><td>Less user-friendly, requires more setup and configuration, limited to Kubernetes environments</td><td>Kubernetes environments</td></tr>
<tr>
<td><strong>Chaos Mesh</strong></td><td>PingCAP</td><td>Kubernetes-native, comprehensive range of fault types, user-friendly dashboard</td><td>Limited to Kubernetes environments</td><td>Kubernetes environments</td></tr>
<tr>
<td><strong>Pumba</strong></td><td>Alexei Ledenev</td><td>Simple, lightweight, operates directly on Docker containers</td><td>Less mature, limited to Docker environments</td><td>Docker environments</td></tr>
<tr>
<td><strong>ToxiProxy</strong></td><td>Shopify</td><td>Focus on network conditions, operates as a TCP proxy</td><td>Primarily limited to network conditions</td><td>Applications that communicate over TCP</td></tr>
</tbody>
</table>
</div><h3 id="heading-conclusion"><strong>Conclusion:</strong></h3>
<p>Each of these tools has its strengths and weaknesses, and the best one for your team depends on your specific needs and environment. By understanding the capabilities of each tool, you can make an informed decision and embrace chaos to build more resilient systems.</p>
<p><code>Originally published at</code> <a target="_blank" href="https://wearecommunity.io/communities/india-devtestsecops-community/articles/5010"><code>Wearecommunity-india-devtestsecops-community</code></a></p>
]]></content:encoded></item><item><title><![CDATA[Understanding and Using 'Try-With-Resources' in Java]]></title><description><![CDATA[Introduction
"Are you tired of dealing with messy finally blocks just to ensure your resources get closed properly?"
"Have you ever found yourself tangled in a web of nested try-catch blocks, only to realize that your application is still prone to re...]]></description><link>https://blog.rakeshvardan.com/understanding-and-using-try-with-resources-in-java</link><guid isPermaLink="true">https://blog.rakeshvardan.com/understanding-and-using-try-with-resources-in-java</guid><category><![CDATA[Java]]></category><category><![CDATA[TestAutomation]]></category><category><![CDATA[exceptionhandling]]></category><category><![CDATA[Code Quality]]></category><dc:creator><![CDATA[Rakesh Vardan]]></dc:creator><pubDate>Tue, 23 Apr 2024 10:30:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1713862766587/dd0687ec-5945-4d3e-90ef-71acd63dbb30.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction">Introduction</h3>
<p><em>"Are you tired of dealing with messy</em> <code>finally</code> <em>blocks just to ensure your resources get closed properly?"</em></p>
<p><em>"Have you ever found yourself tangled in a web of nested</em> <code>try-catch</code> <em>blocks, only to realize that your application is still prone to resource leaks?"</em></p>
<p>If so, you're not alone. Managing resources effectively is a common challenge for many Java programmers, including a surprising number of automation engineers I've interviewed. Despite being introduced in Java 7, the <code>try-with-resources</code> statement, a powerful feature that can greatly simplify resource management, remains underutilized in many areas, including test automation. Many engineers continue to use the traditional approach, missing out on the benefits of safer, more readable code.</p>
<p>In this blog post, we will delve into the <code>try-with-resources</code> statement, exploring how it works and how we can use it to improve the Java code. So, let's get started and say goodbye to those messy <code>finally</code> blocks!</p>
<h3 id="heading-understanding-try-with-resources">Understanding Try-With-Resources</h3>
<p><code>try-with-resources</code> is a <code>try</code> statement that declares one or more resources. A resource is an object that must be closed once the application has finished using it; files, network connections, and database connections are common examples. <strong><em>The declared resources need to implement the</em></strong> <a target="_blank" href="https://docs.oracle.com/javase/8/docs/api/java/io/Closeable.html"><strong><em>Closeable</em></strong></a> <strong><em>or</em></strong> <a target="_blank" href="https://docs.oracle.com/javase/8/docs/api/java/lang/AutoCloseable.html"><strong><em>AutoCloseable</em></strong></a> <strong><em>interface.</em></strong></p>
<p>With <code>try-with-resources</code>, the resource is declared within the <code>try</code> statement itself. When the <code>try</code> block completes, the resource is closed automatically. This is a significant improvement over the conventional approach, which required closing the resource manually in a <code>finally</code> block.</p>
<p><strong>Benefits:</strong></p>
<p>The 'try-with-resources' statement has several benefits:</p>
<ol>
<li><p><strong>Simpler Code:</strong> We no longer need to write explicit code to close the resource in a <code>finally</code> block. This makes the code shorter and easier to read.</p>
</li>
<li><p><strong>Better Resource Management:</strong> The <code>try-with-resources</code> statement ensures that the resource is closed promptly after the program is done using it. This can help prevent resource leaks that can cause the program to behave unpredictably.</p>
</li>
<li><p><strong>Improved Exception Handling:</strong> If an exception is thrown in the <code>try</code> block and another is thrown while closing the resource, the exception from the <code>try</code> block is propagated and the one from <code>close()</code> is <em>suppressed</em> (it can be retrieved via <code>Throwable.getSuppressed()</code>). This makes the exception-handling behavior more predictable.</p>
</li>
</ol>
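<p>To see the suppression behavior concretely, here is a minimal sketch. The <code>FailingResource</code> class is invented purely for illustration: its <code>close()</code> always throws, so we can observe that the <code>try</code> block's exception is the one propagated, while the <code>close()</code> failure is attached to it as a suppressed exception.</p>

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical resource whose close() always fails, used only for illustration
class FailingResource implements AutoCloseable {
    @Override
    public void close() {
        throw new IllegalStateException("close failed");
    }
}

public class SuppressedDemo {

    // Collects the primary exception message followed by any suppressed messages
    static List<String> run() {
        List<String> messages = new ArrayList<>();
        try (FailingResource resource = new FailingResource()) {
            throw new RuntimeException("try block failed");
        } catch (RuntimeException e) {
            messages.add(e.getMessage());              // the propagated exception
            for (Throwable suppressed : e.getSuppressed()) {
                messages.add(suppressed.getMessage()); // the close() failure
            }
        }
        return messages;
    }

    public static void main(String[] args) {
        System.out.println(run()); // [try block failed, close failed]
    }
}
```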
<p><code>All the examples discussed here can be found on</code><a target="_blank" href="https://github.com/rakesh-vardan/java-examples/tree/main/src/main/java/io/learning/try_with_resources"><code>GitHub</code></a></p>
<p><strong>Example:</strong></p>
<p>Let us understand the usage with an example. Below is the code we would write to read a file using the <code>Scanner</code> class with the traditional <code>try-catch-finally</code> syntax.</p>
<pre><code class="lang-java">    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">methodWithTryCatchFinally</span><span class="hljs-params">()</span> </span>{
        Scanner scanner = <span class="hljs-keyword">null</span>;
        <span class="hljs-keyword">try</span> {
            scanner = <span class="hljs-keyword">new</span> Scanner(<span class="hljs-keyword">new</span> File(<span class="hljs-string">"input.txt"</span>));
            <span class="hljs-keyword">while</span> (scanner.hasNextLine()) {
                System.out.println(scanner.nextLine());
            }
        } <span class="hljs-keyword">catch</span> (FileNotFoundException e) {
            e.printStackTrace();
        } <span class="hljs-keyword">finally</span> {
            <span class="hljs-keyword">if</span> (scanner != <span class="hljs-keyword">null</span>) {
                scanner.close();
            }
        }
    }
</code></pre>
<p>This method reads and prints the contents of a file named "input.txt". It uses a <code>Scanner</code> object to read the file line by line. The <code>try</code> block attempts to open the file and read its contents. If the file is not found, a <code>FileNotFoundException</code> is caught and its stack trace is printed. <strong><em>Regardless of whether an exception is thrown, the</em></strong> <code>finally</code> <strong><em>block ensures that the</em></strong> <code>Scanner</code> <strong><em>object is closed to prevent resource leaks.</em></strong></p>
<p>Let us implement the same logic using the <code>try-with-resources</code> syntax.</p>
<pre><code class="lang-java">    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">methodWithTryWithResources</span><span class="hljs-params">()</span> </span>{
        <span class="hljs-keyword">try</span> (Scanner scanner = <span class="hljs-keyword">new</span> Scanner(<span class="hljs-keyword">new</span> File(<span class="hljs-string">"input.txt"</span>))) {
            <span class="hljs-keyword">while</span> (scanner.hasNextLine()) {
                System.out.println(scanner.nextLine());
            }
        } <span class="hljs-keyword">catch</span> (FileNotFoundException e) {
            e.printStackTrace();
        }
    }
</code></pre>
<p>Here, the <code>try</code> statement with the <code>Scanner</code> declaration inside its parentheses forms the <code>try-with-resources</code> section. This syntax automatically closes the resources declared within the parentheses when the <code>try</code> block is exited, either normally or via an exception. <strong><em>This ensures that the</em></strong> <code>Scanner</code> <strong><em>object is closed to prevent resource leaks, without needing an explicit</em></strong> <code>finally</code> <strong><em>block</em>.</strong> If the file is not found, a <code>FileNotFoundException</code> is caught and its stack trace is printed.</p>
<h3 id="heading-working-with-multiple-files">Working with Multiple Files:</h3>
<p>We can also declare and initialize multiple resources within a single <code>try-with-resources</code> statement. First, here is the traditional version, which copies the contents of one file to another:</p>
<pre><code class="lang-java"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">methodWithTryCatchFinally</span><span class="hljs-params">()</span> </span>{
        FileInputStream fis = <span class="hljs-keyword">null</span>;
        FileOutputStream fos = <span class="hljs-keyword">null</span>;
        <span class="hljs-keyword">try</span> {
            fis = <span class="hljs-keyword">new</span> FileInputStream(<span class="hljs-string">"input.txt"</span>);
            fos = <span class="hljs-keyword">new</span> FileOutputStream(<span class="hljs-string">"output.txt"</span>);
            <span class="hljs-keyword">int</span> data;
            <span class="hljs-keyword">while</span> ((data = fis.read()) != -<span class="hljs-number">1</span>) {
                fos.write(data);
            }
        } <span class="hljs-keyword">catch</span> (IOException e) {
            e.printStackTrace();
        } <span class="hljs-keyword">finally</span> {
            <span class="hljs-keyword">if</span> (fis != <span class="hljs-keyword">null</span>) {
                <span class="hljs-keyword">try</span> {
                    fis.close();
                } <span class="hljs-keyword">catch</span> (IOException e) {
                    e.printStackTrace();
                }
            }
            <span class="hljs-keyword">if</span> (fos != <span class="hljs-keyword">null</span>) {
                <span class="hljs-keyword">try</span> {
                    fos.close();
                } <span class="hljs-keyword">catch</span> (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
</code></pre>
<p>This example is similar to the previous one, but now we work with two files using the traditional approach. The code reads data from a file named "input.txt" and writes it to another file named "output.txt". It uses <code>FileInputStream</code> to read the input file and <code>FileOutputStream</code> to write to the output file. The <code>try</code> block attempts to open both files, read data from the input file, and write it to the output file. If an <code>IOException</code> occurs during this process (for example, if one of the files does not exist or cannot be opened), the exception is caught and its stack trace is printed. <strong><em>The</em></strong> <code>finally</code> <strong><em>block ensures that both the</em></strong> <code>FileInputStream</code> <strong><em>and</em></strong> <code>FileOutputStream</code> <strong><em>are closed, regardless of whether an exception occurred. This is important to prevent resource leaks</em></strong>. If an <code>IOException</code> occurs while trying to close the files, it is also caught and its stack trace is printed.</p>
<p>Now, let us rewrite it using the <code>try-with-resources</code> syntax.</p>
<pre><code class="lang-java"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">methodWithTryWithResources</span><span class="hljs-params">()</span> </span>{
        <span class="hljs-keyword">try</span> (FileInputStream fis = <span class="hljs-keyword">new</span> FileInputStream(<span class="hljs-string">"input.txt"</span>);
             FileOutputStream fos = <span class="hljs-keyword">new</span> FileOutputStream(<span class="hljs-string">"output.txt"</span>)) {
            <span class="hljs-keyword">int</span> data;
            <span class="hljs-keyword">while</span> ((data = fis.read()) != -<span class="hljs-number">1</span>) {
                fos.write(data);
            }
        } <span class="hljs-keyword">catch</span> (IOException e) {
            e.printStackTrace();
        }
    }
</code></pre>
<p>As you can see, the same logic has been written more concisely using <code>try-with-resources</code>. It improves the readability of the program and removes the redundant resource clean-up code. <strong><em>The</em></strong> <code>try-with-resources</code> <strong><em>statement automatically closes the</em></strong> <code>FileInputStream</code> <strong><em>and</em></strong> <code>FileOutputStream</code> <strong><em>resources after use, preventing resource leaks.</em></strong> If an <code>IOException</code> occurs (like a file not found), it's caught and its stack trace is printed.</p>
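<p>One detail worth remembering when declaring several resources: they are closed in the <em>reverse</em> order of their declaration, so a later resource can safely depend on an earlier one. A small sketch (the <code>Named</code> class below is invented purely to record the close order):</p>

```java
import java.util.ArrayList;
import java.util.List;

public class CloseOrderDemo {

    // Hypothetical resource that records its name when closed
    static class Named implements AutoCloseable {
        private final String name;
        private final List<String> log;

        Named(String name, List<String> log) {
            this.name = name;
            this.log = log;
        }

        @Override
        public void close() {
            log.add(name);
        }
    }

    // Returns the order in which the two resources were closed
    static List<String> run() {
        List<String> log = new ArrayList<>();
        try (Named first = new Named("first", log);
             Named second = new Named("second", log)) {
            // body does nothing; we only care about the close order
        }
        return log;
    }

    public static void main(String[] args) {
        System.out.println(run()); // [second, first]
    }
}
```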
<h3 id="heading-working-with-database-connections">Working with Database Connections:</h3>
<p>One more real-world example from test automation is connecting to a database to fetch the test data required by our test scripts. Here is how we write the code using the traditional approach.</p>
<pre><code class="lang-java">    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">methodWithTryCatchFinally</span><span class="hljs-params">()</span> </span>{
        Connection conn = <span class="hljs-keyword">null</span>;
        Statement stmt = <span class="hljs-keyword">null</span>;
        <span class="hljs-keyword">try</span> {
            conn = DriverManager.getConnection(<span class="hljs-string">"jdbc:postgresql://localhost:5432/postgres"</span>,
                    <span class="hljs-string">"postgres"</span>, <span class="hljs-string">"password"</span>);
            stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery(<span class="hljs-string">"SELECT * FROM users"</span>);
            <span class="hljs-keyword">while</span> (rs.next()) {
                <span class="hljs-comment">// process the row</span>
            }
        } <span class="hljs-keyword">catch</span> (SQLException e) {
            e.printStackTrace();
        } <span class="hljs-keyword">finally</span> {
            <span class="hljs-keyword">if</span> (stmt != <span class="hljs-keyword">null</span>) {
                <span class="hljs-keyword">try</span> {
                    stmt.close();
                } <span class="hljs-keyword">catch</span> (SQLException e) {
                    e.printStackTrace();
                }
            }
            <span class="hljs-keyword">if</span> (conn != <span class="hljs-keyword">null</span>) {
                <span class="hljs-keyword">try</span> {
                    conn.close();
                } <span class="hljs-keyword">catch</span> (SQLException e) {
                    e.printStackTrace();
                }
            }
        }
    }
</code></pre>
<p>Here we use the JDBC classes from the <code>java.sql</code> package to connect to a PostgreSQL database, execute a SQL query, and process the results. It uses a <code>Connection</code> object to establish a connection to the database and a <code>Statement</code> object to execute the query. The <code>try</code> block attempts to establish the connection, execute the query "SELECT * FROM users", and process each row of the result. If a <code>SQLException</code> occurs during this process (for example, if the database connection fails or the query is invalid), the exception is caught and its stack trace is printed. <strong><em>The</em></strong> <code>finally</code> <strong><em>block ensures that both the</em></strong> <code>Statement</code> <strong><em>and</em></strong> <code>Connection</code> <strong><em>objects are closed, regardless of whether an exception occurred. If a</em></strong> <code>SQLException</code> <strong><em>occurs while trying to close the objects, it is also caught and its stack trace is printed.</em></strong></p>
<p>Let us convert this logic to use the new syntax.</p>
<pre><code class="lang-java">    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">methodWithTryWithResources</span><span class="hljs-params">()</span> </span>{
        <span class="hljs-keyword">try</span> (Connection conn = DriverManager.getConnection(<span class="hljs-string">"jdbc:postgresql://localhost:5432/postgres"</span>,
                <span class="hljs-string">"postgres"</span>, <span class="hljs-string">"password"</span>);
             Statement stmt = conn.createStatement()) {
            ResultSet rs = stmt.executeQuery(<span class="hljs-string">"SELECT * FROM users"</span>);
            <span class="hljs-keyword">while</span> (rs.next()) {
                <span class="hljs-comment">// process the row</span>
            }
        } <span class="hljs-keyword">catch</span> (SQLException e) {
            e.printStackTrace();
        }
    }
</code></pre>
<p>Here we are doing the same logic but using a <code>try-with-resources</code> block to automatically manage the <code>Connection</code> and <code>Statement</code> resources. <strong><em>The</em></strong><code>try-with-resources</code> <strong><em>block automatically closes the</em></strong> <code>Connection</code> <strong><em>and</em></strong> <code>Statement</code> <strong><em>objects after use, preventing resource leaks.</em></strong></p>
<p><code>A try-with-resources statement can still have catch and finally blocks, which work just as they do in a traditional try statement; note that they run after the declared resources have been closed.</code></p>
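<p>A quick sketch of that ordering (the event log below is my own illustration): the resource's <code>close()</code> runs when the <code>try</code> block exits, and the <code>finally</code> block runs after that.</p>

```java
import java.util.ArrayList;
import java.util.List;

public class FinallyOrderDemo {

    // Records the order of events: try body, resource close, finally block
    static List<String> run() {
        List<String> events = new ArrayList<>();
        // AutoCloseable implemented as a lambda, purely for illustration
        try (AutoCloseable resource = () -> events.add("close")) {
            events.add("body");
        } catch (Exception e) {
            events.add("catch"); // not reached in this sketch
        } finally {
            events.add("finally");
        }
        return events;
    }

    public static void main(String[] args) {
        System.out.println(run()); // [body, close, finally]
    }
}
```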
<h3 id="heading-conclusion">Conclusion</h3>
<p>In this blog, we discussed how we can leverage <code>try-with-resources</code> instead of <code>try-catch-finally</code> blocks for exception handling and for writing efficient code. If you're not already using <code>try-with-resources</code> in your Java code, consider starting today!</p>
]]></content:encoded></item><item><title><![CDATA[Choosing the Optimal Approach for API Automation]]></title><description><![CDATA[Introduction
As an automation engineer, testing APIs is a significant part of our role. While there are numerous tools available for this purpose, choosing the right one can significantly impact our testing efficiency. In this blog post, we'll explor...]]></description><link>https://blog.rakeshvardan.com/choosing-the-optimal-approach-for-api-automation</link><guid isPermaLink="true">https://blog.rakeshvardan.com/choosing-the-optimal-approach-for-api-automation</guid><category><![CDATA[#APITestAutomation]]></category><category><![CDATA[Rest Assured]]></category><category><![CDATA[TestAutomation]]></category><category><![CDATA[Spring]]></category><dc:creator><![CDATA[Rakesh Vardan]]></dc:creator><pubDate>Sat, 20 Apr 2024 10:30:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1713606126006/37d9f80e-1ffd-4fdf-bb56-8ceb12a12388.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction">Introduction</h3>
<p>As an automation engineer, testing APIs is a significant part of our role. While there are numerous tools available for this purpose, choosing the right one can significantly impact our testing efficiency. In this blog post, we'll explore several options, including native clients like <code>HttpClient</code> and <code>HttpURLConnection</code>, as well as <code>REST Assured</code>, <code>Spring RestTemplate</code>, <code>Spring WebClient</code>, and <code>Apache HttpClient</code>.</p>
<p>Let's explore a simple use case and add tests using all the options we have.</p>
<p>Consider our API URL <a target="_blank" href="https://jsonplaceholder.typicode.com/users/1"><code>https://jsonplaceholder.typicode.com/users/1</code></a> which returns the below JSON response.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713589659898/acbd285c-9ab2-40e1-b221-7df17560fbc9.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713589933136/37acce18-8a51-4916-8848-3bbe9464cdac.png" alt class="image--center mx-auto" /></p>
<p>So the test is to:</p>
<ul>
<li><p>Invoke the GET API using any client</p>
</li>
<li><p>Get the response &amp; validate status code is <code>200</code></p>
</li>
<li><p>Validate some parts of the response, maybe <code>name</code></p>
</li>
</ul>
<p>Let's see how we can accomplish this with different approaches.</p>
<p><code>The complete code for this tutorial with all examples discussed can be found on</code> <a target="_blank" href="https://github.com/rakesh-vardan/restassured-vs-native-clients"><code>GitHub</code></a><code>.</code></p>
<h3 id="heading-1-httpurlconnection"><strong>1. HttpURLConnection</strong></h3>
<p><a target="_blank" href="https://docs.oracle.com/javase/8/docs/api/java/net/HttpURLConnection.html"><code>HttpURLConnection</code></a> has been a part of the Java standard library since Java 1.1. It's a blocking API, meaning it will hold the thread until it gets the response. It only supports HTTP/1.1. Its API is also low-level, which means we would need to write more code to do the same tasks as with other options.</p>
<pre><code class="lang-java">    <span class="hljs-meta">@Test</span>
    <span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">testWithHttpURLConnection</span><span class="hljs-params">()</span> <span class="hljs-keyword">throws</span> IOException </span>{
        <span class="hljs-comment">// prepare request</span>
        URL url = <span class="hljs-keyword">new</span> URL(<span class="hljs-keyword">this</span>.URL);
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod(<span class="hljs-string">"GET"</span>);

        <span class="hljs-comment">// send request</span>
        connection.connect();

        <span class="hljs-comment">// validate response</span>
        assertEquals(<span class="hljs-number">200</span>, connection.getResponseCode());
        assertEquals(<span class="hljs-string">"application/json; charset=utf-8"</span>, connection
                .getHeaderField(<span class="hljs-string">"Content-Type"</span>));

        BufferedReader reader = <span class="hljs-keyword">new</span> BufferedReader(<span class="hljs-keyword">new</span> InputStreamReader(connection.getInputStream()));
        String line;
        StringBuilder response = <span class="hljs-keyword">new</span> StringBuilder();
        <span class="hljs-keyword">while</span> ((line = reader.readLine()) != <span class="hljs-keyword">null</span>) {
            response.append(line);
        }
        reader.close();

        assertTrue(response.toString().contains(<span class="hljs-string">"Leanne Graham"</span>));
        connection.disconnect();
    }
</code></pre>
<p>This JUnit test uses Java's native <code>HttpURLConnection</code> to send a GET request to the given URL and validates the response. It first creates a new <code>URL</code> object and opens a connection to it using <code>HttpURLConnection</code>. The request method is set to <code>GET</code>. Then, it sends the request by calling <code>connect()</code>. It validates the response by checking that the response code is <code>200</code>, indicating success, and that the <code>Content-Type</code> header is <code>application/json; charset=utf-8</code>. It then reads the response body using a <code>BufferedReader</code> and checks that the body contains the string "Leanne Graham". If any of these checks fail, the test fails. The test may throw an <code>IOException</code> if there's a problem sending the request or receiving the response. After all operations, it disconnects the connection by calling <code>disconnect()</code>.</p>
<h3 id="heading-2-httpclient"><strong>2. HttpClient</strong></h3>
<p>Introduced in Java 11, <a target="_blank" href="https://docs.oracle.com/en%2Fjava%2Fjavase%2F11%2Fdocs%2Fapi%2F%2F/java.net.http/java/net/http/HttpClient.html"><code>HttpClient</code></a> is a modern and flexible API that supports both HTTP/1.1 and HTTP/2. It provides both synchronous (blocking) and asynchronous (non-blocking) programming models. However, it is still relatively low-level and doesn't provide the same testing-oriented functionality as some other options.</p>
<pre><code class="lang-java">    <span class="hljs-meta">@Test</span>
    <span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">testWithHttpClient</span><span class="hljs-params">()</span> <span class="hljs-keyword">throws</span> IOException, InterruptedException </span>{
        <span class="hljs-comment">// prepare request</span>
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(<span class="hljs-keyword">this</span>.URL))
                .build();

        <span class="hljs-comment">// send request</span>
        HttpResponse&lt;String&gt; response = client.send(request,
                HttpResponse.BodyHandlers.ofString());

        <span class="hljs-comment">// validate response</span>
        assertEquals(<span class="hljs-number">200</span>, response.statusCode());
        assertEquals(<span class="hljs-string">"application/json; charset=utf-8"</span>, response.headers()
                .firstValue(<span class="hljs-string">"Content-Type"</span>).get());
        assertTrue(response.body().contains(<span class="hljs-string">"Leanne Graham"</span>));
    }
</code></pre>
<p>This test uses Java's native <code>HttpClient</code> to send a GET request to a specified URL and validates the response. It first creates a new <code>HttpClient</code> instance and builds an <code>HttpRequest</code> for the specified URL. Then, it sends the <code>HttpRequest</code> using the <code>HttpClient</code> and receives the <code>HttpResponse</code>, specifying that the response body should be treated as a String. Then it validates the response as per our test scenario.</p>
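<p>Since the test above uses the synchronous <code>send()</code>, here is a brief sketch of the asynchronous model mentioned earlier: <code>sendAsync()</code> returns a <code>CompletableFuture</code> immediately, so the calling thread is not blocked while the request is in flight. The <code>buildRequest</code> helper and the error handling are my own additions for illustration.</p>

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AsyncClientDemo {

    // Builds a simple GET request for the given URL
    static HttpRequest buildRequest(String url) {
        return HttpRequest.newBuilder()
                .uri(URI.create(url))
                .build();
    }

    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = buildRequest("https://jsonplaceholder.typicode.com/users/1");

        // sendAsync returns immediately; the callbacks run when the response arrives
        client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .thenApply(HttpResponse::statusCode)
                .thenAccept(code -> System.out.println("Status: " + code))
                .exceptionally(t -> {
                    System.out.println("Request failed: " + t.getMessage());
                    return null;
                })
                .join(); // block only so this demo JVM waits for completion
    }
}
```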
<h3 id="heading-3-apache-httpclient"><strong>3. Apache HttpClient</strong></h3>
<p><a target="_blank" href="https://hc.apache.org/httpcomponents-client-4.5.x/index.html"><code>Apache HttpClient</code></a> is a robust, feature-rich, and flexible library that provides almost all the functionality needed to send HTTP requests and handle HTTP responses. It supports both blocking and non-blocking I/O models and provides full control over the HTTP protocol's details.</p>
<p><em>Make sure you're adding the appropriate Maven/Gradle dependency for Apache HttpClient in your project. For Maven, you can use:</em></p>
<pre><code class="lang-xml">    <span class="hljs-tag">&lt;<span class="hljs-name">dependency</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>org.apache.httpcomponents.client5<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>httpclient5<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">version</span>&gt;</span>5.3.1<span class="hljs-tag">&lt;/<span class="hljs-name">version</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">dependency</span>&gt;</span>
</code></pre>
<pre><code class="lang-java">    <span class="hljs-meta">@Test</span>
    <span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">testWithApacheHttpClient</span><span class="hljs-params">()</span> <span class="hljs-keyword">throws</span> ProtocolException, IOException </span>{
        <span class="hljs-comment">// prepare request</span>
        CloseableHttpClient httpClient = HttpClients.createDefault();
        HttpGet request = <span class="hljs-keyword">new</span> HttpGet(<span class="hljs-keyword">this</span>.URL);

        <span class="hljs-comment">// send request</span>
        CloseableHttpResponse response = httpClient.execute(request);

        <span class="hljs-comment">// validate response</span>
        assertEquals(<span class="hljs-number">200</span>, response.getCode());
        assertEquals(<span class="hljs-string">"application/json; charset=utf-8"</span>,
                response.getHeader(<span class="hljs-string">"Content-Type"</span>).getValue());
        assertTrue(EntityUtils.toString(response.getEntity())
                .contains(<span class="hljs-string">"Leanne Graham"</span>));
        httpClient.close();
    }
</code></pre>
<p>This test uses Apache's <code>CloseableHttpClient</code> to send a GET request to a specified URL and validates the response. It first creates a new <code>CloseableHttpClient</code> and builds a <code>HttpGet</code> request for the specified URL. Then, it sends the <code>HttpGet</code> request using the <code>CloseableHttpClient</code> and receives the <code>CloseableHttpResponse</code>. It validates the response similar to the previous examples. After all operations, it closes <code>httpClient</code> to free up system resources.</p>
<h3 id="heading-4-spring-resttemplate"><strong>4. Spring RestTemplate</strong></h3>
<p><a target="_blank" href="https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/web/client/RestTemplate.html"><code>RestTemplate</code></a> is a synchronous HTTP client that is part of the Spring Framework. It provides a higher-level, more user-friendly API than <code>HttpClient</code> and <code>HttpURLConnection</code>. However, as of Spring 5, <code>RestTemplate</code> is in maintenance mode, and the non-blocking <code>WebClient</code> is recommended instead.</p>
<p><em>Make sure you're adding the appropriate Maven/Gradle dependency for Spring Web in your project. For Maven, you can use:</em></p>
<pre><code class="lang-xml">    <span class="hljs-tag">&lt;<span class="hljs-name">dependency</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>org.springframework<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>spring-web<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">version</span>&gt;</span>6.1.6<span class="hljs-tag">&lt;/<span class="hljs-name">version</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">dependency</span>&gt;</span>
</code></pre>
<pre><code class="lang-java">    <span class="hljs-meta">@Test</span>
    <span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">testWithSpringRestTemplate</span><span class="hljs-params">()</span> </span>{
        <span class="hljs-comment">// prepare and send request</span>
        RestTemplate restTemplate = <span class="hljs-keyword">new</span> RestTemplate();
        ResponseEntity&lt;String&gt; response = restTemplate.getForEntity(<span class="hljs-keyword">this</span>.URL, String.class);

        <span class="hljs-comment">// validate response</span>
        assertEquals(<span class="hljs-number">200</span>, response.getStatusCode().value());
        assertEquals(<span class="hljs-string">"application/json; charset=utf-8"</span>, response.getHeaders()
                .getFirst(<span class="hljs-string">"Content-Type"</span>));
        assertTrue(response.getBody().contains(<span class="hljs-string">"Leanne Graham"</span>));
    }
</code></pre>
<p>This test uses Spring's <code>RestTemplate</code> to send a GET request to a specified URL and validates the response. It first creates a new <code>RestTemplate</code> and sends a GET request to the specified URL, receiving the response as a <code>ResponseEntity&lt;String&gt;</code>. Then it validates the response as per our test scenario.</p>
<h3 id="heading-5-spring-webclient"><strong>5. Spring WebClient</strong></h3>
<p><a target="_blank" href="https://docs.spring.io/spring-framework/reference/web/webflux-webclient.html"><code>WebClient</code></a> is a non-blocking, reactive web client introduced in Spring 5 as part of the WebFlux module. It's designed to work in a non-blocking way and is suitable for use in reactive applications where traditional blocking I/O operations are inefficient.</p>
<p><em>Make sure you're adding the appropriate Maven/Gradle dependency for Spring Webflux in your project. For Maven, you can use:</em></p>
<pre><code class="lang-xml">    <span class="hljs-tag">&lt;<span class="hljs-name">dependency</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>org.springframework<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>spring-webflux<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">version</span>&gt;</span>6.1.6<span class="hljs-tag">&lt;/<span class="hljs-name">version</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">dependency</span>&gt;</span>
</code></pre>
<pre><code class="lang-java">    <span class="hljs-meta">@Test</span>
    <span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">testWithSpringWebClient</span><span class="hljs-params">()</span> </span>{
        <span class="hljs-comment">// prepare and send request</span>
        WebClient webClient = WebClient.create();
        ResponseEntity&lt;String&gt; response = webClient.get()
                .uri(<span class="hljs-keyword">this</span>.URL)
                .retrieve()
                .toEntity(String.class)
                .block();

        <span class="hljs-comment">// validate response</span>
        <span class="hljs-keyword">assert</span> response != <span class="hljs-keyword">null</span>;
        assertEquals(HttpStatus.OK, response.getStatusCode());
        assertEquals(<span class="hljs-string">"application/json;charset=utf-8"</span>,
                Objects.requireNonNull(response.getHeaders().getContentType())
                        .toString());
        assertTrue(Objects.requireNonNull(response.getBody())
                .contains(<span class="hljs-string">"Leanne Graham"</span>));
    }
</code></pre>
<p>This test uses Spring's <code>WebClient</code> to send a GET request to a specified URL and retrieve the response as a <code>ResponseEntity&lt;String&gt;</code>. It creates an instance of <code>WebClient</code>, builds the request with <code>retrieve()</code> (the older <code>exchange()</code> method is deprecated in favor of <code>retrieve()</code> and <code>exchangeToMono()</code>), and calls <code>block()</code> to wait until the response is received. It validates the response similar to the previous examples. The <code>Objects.requireNonNull()</code> calls ensure that <code>getContentType()</code> and <code>getBody()</code> do not return <code>null</code>; if they do, a <code>NullPointerException</code> is thrown.</p>
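<p><em>The "compose a pipeline, block only at the edge" style that <code>WebClient</code> uses has a dependency-free analogue in the JDK's <code>CompletableFuture</code>. The sketch below only illustrates that pattern; <code>fetchBody</code> is a hypothetical stand-in, not a real WebClient or JDK API:</em></p>

```java
import java.util.concurrent.CompletableFuture;

public class AsyncSketch {
    // Hypothetical non-blocking call: returns immediately, the result arrives later.
    static CompletableFuture<String> fetchBody(String url) {
        return CompletableFuture.supplyAsync(() -> "{\"name\":\"Leanne Graham\"}");
    }

    public static void main(String[] args) {
        boolean found = fetchBody("https://example.invalid/users/1")
                .thenApply(body -> body.contains("Leanne Graham")) // transform without blocking
                .join();                                           // block at the edge, like Mono.block()
        System.out.println(found);                                 // prints: true
    }
}
```

<p><em>In a fully reactive application you would subscribe instead of blocking; blocking at the edge, as the test above does, is appropriate only in tests and non-reactive code.</em></p>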
<h3 id="heading-6-rest-assured"><strong>6. REST Assured</strong></h3>
<p><a target="_blank" href="https://rest-assured.io/"><code>REST Assured</code></a> is an open-source Java library that simplifies the testing and validation of REST APIs. It provides a high-level, fluent API for sending HTTP requests and validating responses, making it an ideal tool for testing in a behavior-driven development (BDD) style. It is built on top of <code>Apache HTTP Client</code> for handling HTTP requests and responses and <code>Groovy</code> for its syntax and language features. It also uses other libraries such as <code>GSON</code> and <code>Jackson</code> for JSON, and <code>JAXB</code> for XML.</p>
<p><em>Make sure you're adding the appropriate Maven/Gradle dependency for REST Assured in your project. For Maven, you can use:</em></p>
<pre><code class="lang-xml">    <span class="hljs-tag">&lt;<span class="hljs-name">dependency</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>io.rest-assured<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>rest-assured<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">version</span>&gt;</span>5.4.0<span class="hljs-tag">&lt;/<span class="hljs-name">version</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">scope</span>&gt;</span>test<span class="hljs-tag">&lt;/<span class="hljs-name">scope</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">dependency</span>&gt;</span>
</code></pre>
<pre><code class="lang-java">    <span class="hljs-meta">@Test</span>
    <span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">testWithRESTAssured</span><span class="hljs-params">()</span> </span>{
        given().
                baseUri(<span class="hljs-keyword">this</span>.URL).       <span class="hljs-comment">// prepare request</span>
        when()
                .get().                  <span class="hljs-comment">// send request</span>
        then()
                .statusCode(<span class="hljs-number">200</span>).and()   <span class="hljs-comment">// validate response</span>
                .body(containsString(<span class="hljs-string">"Leanne Graham"</span>)).and()
                .header(<span class="hljs-string">"Content-Type"</span>, <span class="hljs-string">"application/json; charset=utf-8"</span>);
    }
</code></pre>
<p>This test uses REST Assured to send a GET request to a specified URL and validate the response. The <code>baseUri(this.URL)</code> method sets the base URL for the request. The <code>get()</code> method sends the GET request. The <code>statusCode(200)</code> assertion checks that the status code of the response is 200, indicating success. The <code>and()</code> methods are used for readability and do not affect the execution of the test.</p>
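<p><em>Under the hood, a fluent chain like this is just method chaining over objects that return <code>this</code>. The toy sketch below shows how a <code>statusCode().and().body()</code> style validator can be built in plain Java; it illustrates the idea only and is not REST Assured's actual implementation:</em></p>

```java
public class FluentSketch {
    // Minimal stand-in response with REST Assured-style chained assertions.
    static class Response {
        final int status;
        final String body;

        Response(int status, String body) { this.status = status; this.body = body; }

        Response statusCode(int expected) {
            if (status != expected) throw new AssertionError("expected " + expected + " but was " + status);
            return this;                   // returning 'this' is what enables chaining
        }

        Response body(String substring) {
            if (!body.contains(substring)) throw new AssertionError("body missing: " + substring);
            return this;
        }

        Response and() { return this; }    // purely for readability, like REST Assured's and()
    }

    public static void main(String[] args) {
        new Response(200, "{\"name\":\"Leanne Graham\"}")
                .statusCode(200).and()
                .body("Leanne Graham");    // reads like a sentence; throws on any mismatch
        System.out.println("all assertions passed");
    }
}
```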
<p>As we see with our examples, writing API tests &amp; adding assertions with REST Assured is very easy. It stands out for its ease of use, powerful validation features, support for various types of authentication, and flexibility. It's built on top of the <code>Apache HttpClient</code>, which means we can customize it to suit our needs. We can add filters, define the request and response specifications, and more. Using REST Assured, we can write efficient tests with very few lines of readable code.</p>
<p>REST Assured offers several benefits for testing RESTful APIs:</p>
<ol>
<li><p><strong>BDD Format:</strong> REST Assured supports Behavior Driven Development (BDD) format, making it easier to write, read, and understand tests. This also facilitates communication between developers, testers, and non-technical stakeholders.</p>
</li>
<li><p><strong>DSL:</strong> REST Assured provides a Domain Specific Language (DSL) for writing tests, which simplifies the process of writing complex HTTP requests.</p>
</li>
<li><p><strong>Specification:</strong> It allows you to define detailed specifications for the API, which can be used to generate detailed reports and documentation.</p>
</li>
<li><p><strong>Schema Validation:</strong> REST Assured supports schema validation for both JSON and XML, ensuring that the API responses match the expected structure.</p>
</li>
<li><p><strong>Ease of Use:</strong> REST Assured is designed to be simple and intuitive, making it easy for beginners to get started with API testing.</p>
</li>
<li><p><strong>Integration:</strong> It integrates seamlessly with existing Java-based testing ecosystems, including JUnit and TestNG.</p>
</li>
<li><p><strong>Flexibility:</strong> REST Assured supports a variety of HTTP methods, including GET, POST, PUT, DELETE, OPTIONS, PATCH, and HEAD, and can handle any type of MIME type, providing flexibility in testing different types of APIs.</p>
</li>
<li><p><strong>Authentication Support:</strong> REST Assured supports various types of authentication, such as Basic, Digest, Form, and OAuth, making it easier to test APIs that require user authentication.</p>
</li>
<li><p><strong>Detailed Logging:</strong> REST Assured provides detailed logging capabilities, which can be very helpful for debugging and understanding the flow of requests and responses.</p>
</li>
<li><p><strong>Built-in Support for Hamcrest Matchers:</strong> REST Assured includes built-in support for Hamcrest matchers, which allows for more readable and flexible assertions. This enhances the descriptiveness of our tests, making them easier to read and maintain.</p>
</li>
</ol>
<p>Overall, REST Assured provides a comprehensive and user-friendly framework for testing RESTful APIs, making it a popular choice among developers and testers.</p>
<p>Here's a comparison table for all the mentioned solutions:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature/Tool</td><td>HttpURLConnection</td><td>HttpClient</td><td>Apache HttpClient</td><td>Spring RestTemplate</td><td>Spring WebClient</td><td>REST Assured</td></tr>
</thead>
<tbody>
<tr>
<td>HTTP/2 Support</td><td>No</td><td>Yes</td><td>Yes</td><td>No</td><td>Yes</td><td>Yes</td></tr>
<tr>
<td>Non-Blocking I/O</td><td>No</td><td>Yes</td><td>Yes</td><td>No</td><td>Yes</td><td>No</td></tr>
<tr>
<td>WebSocket Support</td><td>No</td><td>Yes</td><td>No</td><td>No</td><td>Yes</td><td>No</td></tr>
<tr>
<td>OAuth2 Support</td><td>No</td><td>No</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr>
<td>JSON Support</td><td>No</td><td>No</td><td>No</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr>
<td>XML Support</td><td>No</td><td>No</td><td>No</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr>
<td>Fluent API</td><td>No</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr>
<tr>
<td>Built-in Response Validation</td><td>No</td><td>No</td><td>No</td><td>No</td><td>No</td><td>Yes</td></tr>
<tr>
<td>BDD Style Support</td><td>No</td><td>No</td><td>No</td><td>No</td><td>No</td><td>Yes</td></tr>
<tr>
<td>Part of Java Standard Library</td><td>Yes</td><td>Yes</td><td>No</td><td>No</td><td>No</td><td>No</td></tr>
</tbody>
</table>
</div><h3 id="heading-conclusion">Conclusion</h3>
<p>While native clients like <code>HttpClient</code> and <code>HttpURLConnection</code>, as well as libraries like <code>Spring RestTemplate</code>, <code>Spring WebClient</code>, and <code>Apache HttpClient</code> have their specific uses, <code>REST Assured</code> is a superior choice for test automation engineers looking to efficiently test REST APIs. Its high-level, fluent API, powerful validation features, and support for various types of authentication make it a versatile and efficient tool for API test automation. So, if you're an automation engineer looking to streamline your API testing, give REST Assured a try!</p>
]]></content:encoded></item><item><title><![CDATA[Chaos Engineering: Embracing Chaos to Build Resilient Systems]]></title><description><![CDATA[Introduction
In today's rapidly evolving digital landscape, where systems are becoming increasingly complex and interconnected, ensuring the reliability and resilience of software applications is more critical than ever. In this quest for robustness,...]]></description><link>https://blog.rakeshvardan.com/chaos-engineering-embracing-chaos-to-build-resilient-systems</link><guid isPermaLink="true">https://blog.rakeshvardan.com/chaos-engineering-embracing-chaos-to-build-resilient-systems</guid><category><![CDATA[SystemResiliency]]></category><category><![CDATA[Chaos Engineering]]></category><category><![CDATA[fault tolerance]]></category><category><![CDATA[Software Testing]]></category><category><![CDATA[Disaster recovery]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Microservices]]></category><dc:creator><![CDATA[Rakesh Vardan]]></dc:creator><pubDate>Tue, 16 Apr 2024 14:20:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1713278216605/ce998442-583b-4b48-bb1e-0a3d6b058267.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction"><strong>Introduction</strong></h3>
<p>In today's rapidly evolving digital landscape, where systems are becoming increasingly complex and interconnected, ensuring the reliability and resilience of software applications is more critical than ever. In this quest for robustness, a new discipline called "Chaos Engineering" has emerged. Chaos Engineering is not about causing chaos, but rather a proactive approach to testing and strengthening systems by intentionally injecting controlled chaos into them. By embracing chaos, organizations can uncover vulnerabilities, improve their system's reliability, and ultimately deliver better experiences to their users.</p>
<p>‘Chaos Engineering’ has been defined in various ways by different groups &amp; organizations:</p>
<ul>
<li><p><em>Chaos Engineering is the</em> <strong><em>discipline of experimenting</em></strong> <em>on a system to</em> <strong><em>build confidence</em></strong> <em>in the system’s capability to withstand turbulent conditions in production. -</em> <a target="_blank" href="https://principlesofchaos.org/"><em>principlesofchaos</em></a></p>
</li>
<li><p><em>Chaos Engineering goes beyond traditional (failure) testing in that it's not only about verifying assumptions. It helps us explore the</em> <strong><em>unpredictable</em></strong> <em>things that could happen, and discover new properties of our inherently chaotic systems. -</em> <a target="_blank" href="https://www.gremlin.com/chaos-engineering"><em>Gremlin</em></a></p>
</li>
<li><p><em>Chaos engineering is the science behind</em> <strong><em>intentionally injecting failure</em></strong> <em>into systems to gauge resiliency –</em> <a target="_blank" href="https://www.harness.io/blog/chaos-engineering"><em>Harness</em></a></p>
</li>
</ul>
<h3 id="heading-understanding-chaos-engineering"><strong>Understanding Chaos Engineering</strong></h3>
<p>Chaos Engineering is rooted in the idea that to build resilient systems, it is essential to actively test their behavior under adverse conditions. The approach draws inspiration from real-world experiences where systems fail due to unexpected circumstances, and aims to simulate those scenarios in a controlled environment.</p>
<p><em>Chaos Engineering is like a firefighter running practice drills. By intentionally setting controlled fires, the firefighters can understand how quickly it might spread, how effectively they can eliminate it, and what tactics work best under pressure. Similarly, Chaos Engineering believes in creating troublesome situations for a system to understand how it reacts. Just like the real world, where unexpected things can go wrong, these simulated conditions help find out the weak points in a system. By learning from these experiments, the organization can work on strengthening these weak areas, making the system more resilient and better equipped to deal with actual problems if they arise.</em></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713275818036/406f73ac-cb23-4870-8474-dff3958188b4.png" alt class="image--center mx-auto" /></p>
<p><em><sup>1</sup> Image generated with Dall-E3</em></p>
<p>In the same way, Chaos Engineering deliberately tests systems under less-than-ideal conditions, recreating situations where things could go wrong due to unanticipated circumstances. This lets organizations see the weak points in their systems and work to make them stronger and better equipped to handle such situations in the future. The controlled environment ensures that, while these tests run, the system's normal functioning is not pushed into a genuinely chaotic state.</p>
<h3 id="heading-chaos-engineering-principles"><strong>Chaos Engineering Principles</strong></h3>
<p>Chaos Engineering is guided by a set of core principles that drive its practice. Let's explore them briefly.</p>
<ul>
<li><p><strong><em>Start with establishing a steady state:</em></strong> This refers to defining normal behavior for your system to understand its expected outcomes better.</p>
</li>
<li><p><strong><em>Hypothesize about what will happen:</em></strong> Chaos experiments begin with a clear hypothesis about how the system will behave under certain chaotic conditions. This hypothesis serves as a guide to identify potential vulnerabilities and expected outcomes.</p>
</li>
<li><p><strong><em>Introduce variables that reflect real-world events:</em></strong> Chaos experiments should mirror factors that could likely occur within your production environment.</p>
</li>
<li><p><strong><em>Attempt to disprove the hypothesis:</em></strong> By introducing controlled disruptions, you can test whether the system works as expected. If not, you will learn about a vulnerability.</p>
</li>
</ul>
<p><strong>Other principles include:</strong></p>
<ul>
<li><p><strong><em>Define measurable outcomes:</em></strong> Chaos experiments should have measurable objectives and success criteria. This ensures that the results can be evaluated objectively and the impact of chaos can be quantified.</p>
</li>
<li><p><strong><em>Focus on steady-state behavior</em>:</strong> Chaos experiments measure the system's steady-state output (the behavior users actually experience) rather than internal implementation details. The goal is to uncover weaknesses that might not surface during regular testing or in controlled environments.</p>
</li>
<li><p><strong><em>Apply blast radius limits:</em></strong> To minimize the potential impact of chaos experiments, it is crucial to define the scope and boundaries. By limiting the blast radius, organizations can control the impact on the system and mitigate potential risks.</p>
</li>
<li><p><strong><em>Automate where possible:</em></strong> Chaos experiments should be automated to enable repeatable, controlled chaos. Automation reduces human error, ensures consistency, and allows for scaling the chaos engineering practice.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713276039227/c99fde99-8ca5-4bd6-8f55-8c4d7b3fb43d.png" alt class="image--center mx-auto" /></p>
<p><em>Source: Apexon</em> <a target="_blank" href="https://www.apexon.com/resources/white-papers/chaos-engineering/"><em>Whitepaper</em></a> <em>on Chaos Engineering</em></p>
<h3 id="heading-benefits-of-chaos-engineering"><strong>Benefits of Chaos Engineering</strong></h3>
<p>Implementing Chaos Engineering as a proactive practice brings several benefits to organizations.</p>
<ul>
<li><p><strong><em>Improved system resilience:</em></strong> Early vulnerability and weak point identification allow companies to make the necessary changes to strengthen their systems. By identifying these weak spots before they become operational problems, chaos engineering contributes to the overall resilience of the system.</p>
</li>
<li><p><strong><em>Reduced downtime and increased availability:</em></strong> Organizations can identify possible failure situations and take the necessary steps to prevent or recover from them by using chaos experiments. Organizations may reduce downtime and guarantee high availability for their clients by proactively identifying these issues.</p>
</li>
<li><p><strong><em>Enhanced customer experience:</em></strong> Chaos Engineering contributes to more streamlined and dependable end-user experiences by proactively testing and fixing system flaws. It assists businesses in identifying any problems and acting upon them before they negatively affect the consumer experience.</p>
</li>
<li><p><strong><em>Cultural shift towards resilience:</em></strong> Through a cultural shift, Chaos Engineering assists organizations in making resilience a priority. It educates groups to embrace difficulties and see flaws as chances for improvement. By promoting a culture of ongoing learning and development, this strategy creates systems that are more reliable and strong.</p>
</li>
</ul>
<p>Let's use an eCommerce website as an example to demonstrate the Chaos Engineering principles.</p>
<p><strong><em>1. Start with Establishing a Steady State:</em></strong> The normal operations of this eCommerce site might involve users browsing items, adding them to a cart, checking out, and making a payment. The performance metrics of these operations like load time, server response time, success rate of transactions, etc., in normal conditions, are recorded.</p>
<p><strong><em>2. Hypothesize About What Will Happen</em>:</strong> Let's hypothesize that if the payment gateway service were to fail, users would still be able to browse items and add them to their carts, but wouldn't be able to complete transactions.</p>
<p> <strong><em>3. Introduce Variables Reflecting Real-World Events:</em></strong> You simulate a real-world event - in this case, the failure of the payment gateway. This could be replicated by intentionally disabling the payment gateway service.</p>
<p><strong><em>4. Automate Experiments and Try to Disprove the Hypothesis:</em></strong> You run the test automatically using chaos engineering tools during peak business hours when the web traffic is high. Observe what happens: Can users still browse and add items to their carts? Does the checkout process fail gracefully, with users receiving an appropriate error message?</p>
<p><strong><em>5. Minimize Blast Radius:</em></strong> Instead of running this test in the live environment affecting all users, you first run it in a controlled environment, like a clone of your production environment, affecting a limited number of users. This ensures that in case the test reveals serious issues, your entire user base is not affected.</p>
<p> By following these principles, chaos engineering allows businesses to proactively identify and fix potential system weaknesses, ensuring that their services run smoothly, and customer experience is not disrupted, even in the event of unexpected system failures.</p>
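<p><em>The hypothesis from steps 2–4 can be expressed as a tiny runnable simulation. Everything below (<code>Shop</code>, <code>failPaymentGateway</code>) is a hypothetical stand-in for the real services, not an actual chaos engineering tool:</em></p>

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical eCommerce service with an injectable payment-gateway failure.
class Shop {
    private final List<String> cart = new ArrayList<>();
    private boolean paymentGatewayUp = true;

    void failPaymentGateway() { paymentGatewayUp = false; }   // the injected chaos

    List<String> browse() { return List.of("book", "lamp", "mug"); }

    void addToCart(String item) { cart.add(item); }

    // Checkout must degrade gracefully, not crash, while the gateway is down.
    String checkout() {
        if (!paymentGatewayUp) {
            return "Payment unavailable, please retry later";
        }
        return "Order placed: " + cart;
    }
}

public class ChaosExperiment {
    public static void main(String[] args) {
        Shop shop = new Shop();
        shop.failPaymentGateway();                  // step 3: introduce the real-world failure

        // Hypothesis (step 2): browsing and the cart still work...
        System.out.println(shop.browse());          // prints: [book, lamp, mug]
        shop.addToCart("book");

        // ...and checkout fails gracefully with a clear message.
        System.out.println(shop.checkout());        // prints: Payment unavailable, please retry later
    }
}
```

<p><em>If <code>checkout()</code> instead threw an unhandled exception, the experiment would disprove the hypothesis and reveal a resilience gap to fix.</em></p>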
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713276449020/b0355a02-a930-4037-8071-693c1987fd7e.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-real-time-incidents"><strong>Real-time incidents</strong></h3>
<p> Here are a few notable incidents where a lack of chaos engineering has played a part in service disruption or application outage:</p>
<ol>
<li><p><strong>GitHub (October 21, 2018):</strong> GitHub went down for 24 hours due to data storage system failure. The direct cause was a network partition, which led to inconsistency across their data storage system. Chaos engineering practices could have helped detect such a vulnerability beforehand.</p>
<p> <a target="_blank" href="https://github.blog/2018-10-30-oct21-post-incident-analysis/">https://github.blog/2018-10-30-oct21-post-incident-analysis/</a></p>
</li>
</ol>
<ol start="2">
<li><strong>Amazon (September 20, 2015) – DynamoDB availability issue:</strong> Amazon's DynamoDB faced an availability issue in one of its regional zones, leading to the failure of more than 20 Amazon Web Services that depended on DynamoDB in that particular region. As a result, several websites, including Netflix, experienced downtime for a few hours. However, Netflix's outage was less severe compared to other sites, thanks to their foresight in creating and utilizing a chaos engineering tool known as Chaos Kong, which helped them prepare for such situations.</li>
</ol>
<ul>
<li><p><a target="_blank" href="https://netflixtechblog.com/chaos-engineering-upgraded-878d341f15fa">https://netflixtechblog.com/chaos-engineering-upgraded-878d341f15fa</a></p>
</li>
<li><p><a target="_blank" href="https://aws.amazon.com/message/5467D2/">https://aws.amazon.com/message/5467D2/</a></p>
</li>
</ul>
<ol start="3">
<li><strong>Amazon AWS (February 28, 2017) – S3 service outage:</strong> A major outage in Amazon's S3 web service resulted in disruption for many online services. The outage was reportedly due to a command that was incorrectly entered during a routine debugging. Chaos engineering could have helped develop safeguards against such human errors.</li>
</ol>
<ul>
<li><p><a target="_blank" href="https://www.gremlin.com/blog/the-2017-amazon-s-3-outage">https://www.gremlin.com/blog/the-2017-amazon-s-3-outage</a></p>
</li>
<li><p><a target="_blank" href="https://aws.amazon.com/message/41926/">https://aws.amazon.com/message/41926/</a></p>
</li>
</ul>
<h3 id="heading-metrics-to-measure"><strong>Metrics to measure</strong></h3>
<p> Below are some metrics that can help measure the effectiveness of chaos engineering efforts and guide future experiments.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Metric</strong></td><td><strong>Description</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Mean Time to Recovery (MTTR)</td><td>The average time it takes to recover from a failure.</td></tr>
<tr>
<td>Failure Injection Rate</td><td>The rate at which failures are being injected into the system.</td></tr>
<tr>
<td>System Availability</td><td>The percentage of time that the system is available to users.</td></tr>
<tr>
<td>Error Rates</td><td>The rate at which the system is producing errors.</td></tr>
<tr>
<td>Performance Metrics (Response time, Throughput, Latency)</td><td>These metrics should remain stable while injecting failures into the system.</td></tr>
<tr>
<td>Incident Reports</td><td>The number of incidents reported can provide a measure of how often things go wrong.</td></tr>
<tr>
<td>Cost of Downtime</td><td>The cost associated with system downtime, including lost revenue and damage to the brand.</td></tr>
<tr>
<td>Service Level Indicators (SLIs)</td><td>Specific measures of a service's level of performance or reliability.</td></tr>
<tr>
<td>Service Level Objectives (SLOs)</td><td>Targets for the SLIs.</td></tr>
<tr>
<td>Chaos Experiment Success Rate</td><td>The percentage of chaos experiments that pass</td></tr>
<tr>
<td>Time to Detection (TTD)</td><td>The time it takes to detect a failure.</td></tr>
<tr>
<td>Escaped Failures</td><td>Failures that were not caught during chaos experiments and instead were found in production.</td></tr>
<tr>
<td>Number of New Issues Discovered</td><td>The number of new issues or vulnerabilities discovered during chaos experiments</td></tr>
<tr>
<td>User Impact</td><td>The impact of failures on users, such as the number of user complaints or the decrease in user activity during a failure.</td></tr>
</tbody>
</table>
</div><h3 id="heading-tools-and-technologies"><strong>Tools and Technologies</strong></h3>
<p>Chaos Engineering has gained significant popularity as a proactive approach to building resilient systems. To effectively practice chaos engineering, various tools and technologies have emerged to assist in creating controlled chaos experiments. Some of the most widely used tools are:</p>
<ul>
<li><p><a target="_blank" href="https://netflix.github.io/chaosmonkey/"><strong>Chaos Monkey</strong></a><strong>:</strong> Developed by Netflix</p>
</li>
<li><p><a target="_blank" href="https://www.gremlin.com/chaos-engineering"><strong>Gremlin</strong></a><strong>:</strong> Developed by Gremlin Inc</p>
</li>
<li><p><a target="_blank" href="https://chaostoolkit.org/"><strong>Chaos Toolkit</strong></a><strong>:</strong> An open-source project developed by ChaosIQ (now Reliably)</p>
</li>
<li><p><a target="_blank" href="https://litmuschaos.io/"><strong>Litmus</strong></a><strong>:</strong> Developed by MayaData &amp; the open-source community</p>
</li>
<li><p><a target="_blank" href="https://powerfulseal.github.io/powerfulseal/"><strong>PowerfulSeal</strong></a><strong>:</strong> Developed by Bloomberg</p>
</li>
<li><p><a target="_blank" href="https://chaos-mesh.org/"><strong>Chaos Mesh:</strong></a> Developed by PingCAP &amp; incubating project at CNCF</p>
</li>
<li><p><a target="_blank" href="https://github.com/alexei-led/pumba"><strong>Pumba</strong></a><strong>:</strong> Developed by Alexei Ledenev</p>
</li>
<li><p><a target="_blank" href="https://github.com/Shopify/toxiproxy"><strong>ToxiProxy</strong></a><strong>:</strong> Developed by Shopify</p>
</li>
</ul>
<h3 id="heading-conclusion"><strong>Conclusion</strong></h3>
<p>Given the current state of technology, chaos engineering is an essential discipline that helps companies create robust systems despite growing complexity. By embracing chaos and conducting controlled experiments, organizations can find vulnerabilities, build resilience, and enhance user experience. As technology continues to advance, chaos engineering gives businesses a powerful way to test and strengthen their systems in advance, ensuring they keep functioning even in the face of disorder.</p>
<p>Chaos engineering tools and technologies have significantly contributed to the adoption and practice of chaos engineering. Each tool mentioned above offers unique advantages and disadvantages, and the choice of tool depends on factors such as the organization’s specific requirements, existing infrastructure, and budget constraints. Regardless of the tool chosen, the adoption of chaos engineering practices can help organizations build more resilient systems, proactively identify weaknesses, and ultimately enhance the overall reliability and user experience of their applications.</p>
<p><code>Originally published at</code> <a target="_blank" href="https://wearecommunity.io/communities/india-devtestsecops-community/articles/4859"><code>Wearecommunity-india-devtestsecops-community</code></a></p>
<h3 id="heading-appendix"><strong>Appendix:</strong></h3>
<ol>
<li><p>Prompt used -&gt; <em>Visualize an intense scene where a female firefighter is in full gear, courageously running practice drills amidst roaring flames. The controlled fire is blazing intensely, illuminating the dusk sky. You also see a male firefighter in the background, coordinating operations over the radio. Include multiple firefighting vehicles like fire engines and water tenders, ready with their lights flashing, adding to the drama of the scenario. Capture the dedication of these professionals, their teamwork, and the adrenaline-filled atmosphere.</em>   </p>
</li>
<li><p><strong>References:</strong>      </p>
<ul>
<li><p><a target="_blank" href="https://principlesofchaos.org/">https://principlesofchaos.org/</a>    </p>
</li>
<li><p><a target="_blank" href="https://www.gremlin.com/chaos-engineering">https://www.gremlin.com/chaos-engineering</a></p>
</li>
<li><p><a target="_blank" href="https://www.harness.io/blog/chaos-engineering">https://www.harness.io/blog/chaos-engineering</a>   </p>
</li>
<li><p><a target="_blank" href="https://www.gremlin.com/community/tutorials/chaos-engineering-monitoring-metrics-guide">https://www.gremlin.com/community/tutorials/chaos-engineering-monitoring-metrics-guide</a></p>
</li>
</ul>
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[[Java] Reverse words in a String]]></title><description><![CDATA[Problem Statement:
https://leetcode.com/problems/reverse-words-in-a-string/
String input = "Hello World!";
String output = "World! Hello"

Solution-1
import org.testng.annotations.Test;

import java.util.Arrays;

import static org.assertj.core.api.As...]]></description><link>https://blog.rakeshvardan.com/reverse-words-in-a-sentence</link><guid isPermaLink="true">https://blog.rakeshvardan.com/reverse-words-in-a-sentence</guid><category><![CDATA[Java]]></category><category><![CDATA[string]]></category><category><![CDATA[coding]]></category><dc:creator><![CDATA[Rakesh Vardan]]></dc:creator><pubDate>Thu, 28 Mar 2024 18:48:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1713935275613/a949a1db-5be9-436d-b7d6-59a86d35f498.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-problem-statement">Problem Statement:</h3>
<p><a target="_blank" href="https://leetcode.com/problems/reverse-words-in-a-string/">https://leetcode.com/problems/reverse-words-in-a-string/</a></p>
<pre><code class="lang-java">String input = <span class="hljs-string">"Hello World!"</span>;
String output = <span class="hljs-string">"World! Hello"</span>
</code></pre>
<h3 id="heading-solution-1">Solution-1</h3>
<pre><code class="lang-java"><span class="hljs-keyword">import</span> org.testng.annotations.Test;

<span class="hljs-keyword">import</span> java.util.Arrays;

<span class="hljs-keyword">import</span> <span class="hljs-keyword">static</span> org.assertj.core.api.Assertions.assertThat;

<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Solution1</span> </span>{

    String input1 = <span class="hljs-string">"Hello World!"</span>;             <span class="hljs-comment">// "World! Hello"</span>
    String input2 = <span class="hljs-string">" Hello World! "</span>;           <span class="hljs-comment">// "World! Hello"</span>
    String input3 = <span class="hljs-string">"Hello  World its  me "</span>;    <span class="hljs-comment">// "me its World Hello"</span>
    String input4 = <span class="hljs-string">"Hello"</span>;                    <span class="hljs-comment">// "Hello"</span>

    <span class="hljs-function"><span class="hljs-keyword">public</span> String <span class="hljs-title">reverseWordsS1</span><span class="hljs-params">(String sentence)</span> </span>{
        String[] words = sentence.split(<span class="hljs-string">" "</span>);
        StringBuilder reversedString = <span class="hljs-keyword">new</span> StringBuilder();

        <span class="hljs-comment">// to remove the empty strings in the array - corner case</span>
        String[] modifiedWordsArray = Arrays.stream(words)
                                            .filter(s -&gt; !s.isEmpty())
                                            .toArray(String[]::<span class="hljs-keyword">new</span>);

        <span class="hljs-keyword">for</span> (<span class="hljs-keyword">int</span> i = modifiedWordsArray.length - <span class="hljs-number">1</span>; i &gt;= <span class="hljs-number">0</span>; i--) {
            reversedString.append(modifiedWordsArray[i]).append(<span class="hljs-string">' '</span>);
        }
        <span class="hljs-keyword">return</span> reversedString.toString().trim();
    }

    <span class="hljs-meta">@Test</span>
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">testLogic</span><span class="hljs-params">()</span> </span>{
        assertThat(<span class="hljs-keyword">this</span>.reverseWordsS1(input1)).isEqualTo(<span class="hljs-string">"World! Hello"</span>);
        assertThat(<span class="hljs-keyword">this</span>.reverseWordsS1(input2)).isEqualTo(<span class="hljs-string">"World! Hello"</span>);
        assertThat(<span class="hljs-keyword">this</span>.reverseWordsS1(input3)).isEqualTo(<span class="hljs-string">"me its World Hello"</span>);
        assertThat(<span class="hljs-keyword">this</span>.reverseWordsS1(input4)).isEqualTo(<span class="hljs-string">"Hello"</span>);
    }
}
</code></pre>
<p>In this code, the <code>reverseWordsS1</code> method takes a sentence as input, splits the sentence into an array of words, and then appends the words in reverse order to a <code>StringBuilder</code> object. Finally, it returns the reversed string. If the split produces any empty strings (for example, due to consecutive spaces), we need to filter them out as well.</p>
<p>If you run this code with "Hello World!" as input, it will print: "World! Hello".</p>
<h3 id="heading-solution-2">Solution-2:</h3>
<p>Another approach is to use a stack. Stack follows a last-in, first-out (LIFO) principle which can be used to reverse the words in a sentence.</p>
<pre><code class="lang-java"><span class="hljs-keyword">import</span> org.testng.annotations.Test;

<span class="hljs-keyword">import</span> java.util.Arrays;
<span class="hljs-keyword">import</span> java.util.Stack;

<span class="hljs-keyword">import</span> <span class="hljs-keyword">static</span> org.assertj.core.api.Assertions.assertThat;

<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Solution2</span> </span>{

    <span class="hljs-function"><span class="hljs-keyword">public</span> String <span class="hljs-title">reverseWordsS2</span><span class="hljs-params">(String sentence)</span> </span>{
        String[] words = sentence.split(<span class="hljs-string">" "</span>);
        Stack&lt;String&gt; stack = <span class="hljs-keyword">new</span> Stack&lt;&gt;();

        String[] modifiedWordsArray = Arrays.stream(words)
                                        .filter(s -&gt; !s.isEmpty())
                                        .toArray(String[]::<span class="hljs-keyword">new</span>);

        <span class="hljs-keyword">for</span> (String word: modifiedWordsArray) {
            stack.push(word);
        }

        StringBuilder reversedString = <span class="hljs-keyword">new</span> StringBuilder();

        <span class="hljs-keyword">while</span> (!stack.isEmpty()) {
            reversedString.append(stack.pop()).append(<span class="hljs-string">' '</span>);
        }
        <span class="hljs-keyword">return</span> reversedString.toString().trim();
    }
}
</code></pre>
<p>In this version of the code, it splits the sentence into words and then pushes each word onto a stack. It then pops each word off the stack, which results in the words being in reverse order, and appends them to a <code>StringBuilder</code> object. Finally, it returns the reversed string.</p>
<h3 id="heading-solution-3">Solution-3:</h3>
<p>We can also use Java 8's Stream API to reverse the words in a sentence.</p>
<pre><code class="lang-java"><span class="hljs-keyword">import</span> java.util.Arrays;
<span class="hljs-keyword">import</span> java.util.stream.Collectors;

<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Solution3</span> </span>{
    <span class="hljs-function"><span class="hljs-keyword">public</span> String <span class="hljs-title">reverseWordsS3</span><span class="hljs-params">(String sentence)</span> </span>{
        String[] words = sentence.split(<span class="hljs-string">" "</span>);
        String[] modifiedWordsArray = Arrays.stream(words)
                                        .filter(s -&gt; !s.isEmpty())
                                        .toArray(String[]::<span class="hljs-keyword">new</span>);

        <span class="hljs-keyword">return</span> Arrays.stream(modifiedWordsArray)
                                .reduce((firstWord, secondWord) -&gt; secondWord + <span class="hljs-string">" "</span> + firstWord)
                                .orElse(sentence);
    }
}
</code></pre>
<p>In this version of the code, it uses the <code>split</code> method to divide the sentence into an array of words. <code>Arrays.stream</code> then creates a Stream from this array. The <code>reduce</code> operation combines the words in reverse order. The <code>orElse</code> method returns the original sentence if it consists of only one word with no spaces. Finally, the reversed sentence is returned.</p>
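<p>To see the fold order of <code>reduce</code> concretely, here is a compact, self-contained sketch of the same idea (the class and method names are illustrative, and it uses <code>trim()</code> plus a <code>\s+</code> regex split in place of the explicit filter step):</p>

```java
import java.util.Arrays;

public class ReduceDemo {

    // Reverse word order with reduce: the stream folds left-to-right, so each
    // new word is prepended to the accumulated result:
    // ("Hello","World!") -> "World! Hello", then ("World! Hello","its") -> "its World! Hello", ...
    static String reverseWords(String sentence) {
        return Arrays.stream(sentence.trim().split("\\s+"))
                     .reduce((first, second) -> second + " " + first)
                     .orElse(sentence);
    }

    public static void main(String[] args) {
        System.out.println(reverseWords("Hello  World its  me "));  // me its World Hello
    }
}
```

<p>Because <code>trim()</code> removes the outer whitespace and <code>\s+</code> collapses runs of spaces, the corner cases with leading, trailing, and repeated spaces are handled without a separate filtering pass.</p>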
<h3 id="heading-solution-4">Solution-4:</h3>
<p>Another approach uses the <code>Deque</code> interface in Java. A Deque (short for Double Ended Queue) is a data structure that allows you to insert and remove elements from both ends. This makes it very suitable for problems involving reversal or rotation.</p>
<pre><code class="lang-java"><span class="hljs-keyword">import</span> java.util.ArrayDeque;
<span class="hljs-keyword">import</span> java.util.ArrayList;
<span class="hljs-keyword">import</span> java.util.Arrays;
<span class="hljs-keyword">import</span> java.util.Deque;
<span class="hljs-keyword">import</span> java.util.List;

<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Solution4</span> </span>{
   <span class="hljs-function"><span class="hljs-keyword">public</span> String <span class="hljs-title">reverseStringS4</span><span class="hljs-params">(String sentence)</span> </span>{
        String[] words = sentence.split(<span class="hljs-string">" "</span>);

        <span class="hljs-comment">// to remove the empty strings in the array - another approach</span>
        List&lt;String&gt; list = <span class="hljs-keyword">new</span> ArrayList&lt;&gt;(Arrays.asList(words));
        list.removeIf(String::isEmpty);

        String[] modifiedArray = list.toArray(<span class="hljs-keyword">new</span> String[<span class="hljs-number">0</span>]);
        Deque&lt;String&gt; stack = <span class="hljs-keyword">new</span> ArrayDeque&lt;&gt;();

        <span class="hljs-keyword">for</span> (String word : modifiedArray) {
            stack.push(word);
        }

        StringBuilder reversedSentence = <span class="hljs-keyword">new</span> StringBuilder();

        <span class="hljs-keyword">while</span> (!stack.isEmpty()) {
            reversedSentence.append(stack.pop());
            <span class="hljs-keyword">if</span> (!stack.isEmpty()) {
                reversedSentence.append(<span class="hljs-string">" "</span>);
            }
        }
        <span class="hljs-keyword">return</span> reversedSentence.toString();
    }
}
</code></pre>
<p>In this code, the <code>reverseStringS4</code> method splits the sentence into an array of words and pushes each word onto a stack (implemented as an <code>ArrayDeque</code>). Then it pops each word off the stack (which results in the words being in reverse order) and appends them to a <code>StringBuilder</code>. If the stack is not empty after popping a word, it adds a space to the <code>StringBuilder</code>. Finally, the method returns the reversed sentence.</p>
<h3 id="heading-comparative-table">Comparative table:</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Approach</td><td>Advantages</td><td>Disadvantages</td><td>Time Complexity</td><td>Space Complexity</td></tr>
</thead>
<tbody>
<tr>
<td>1. Split &amp; Loop</td><td>- Easy to understand.  - Simple implementation.</td><td>- Uses extra space for creating the array and StringBuilder.</td><td>O(n)</td><td>O(n)</td></tr>
<tr>
<td>2. Using Stack</td><td>- Consistent with principles of data structure.</td><td>- Uses extra space for creating the stack and StringBuilder.</td><td>O(n)</td><td>O(n)</td></tr>
<tr>
<td>3. Using Stream API</td><td>- Clean and concise syntax.  - The functional style can improve readability.</td><td>- Can be slower due to the overhead of Streams.  - May be hard to understand for those not familiar with functional programming.</td><td>O(n)</td><td>O(n)</td></tr>
<tr>
<td>4. Using Deque</td><td>- Well covers the concept of data structures.</td><td>- Uses extra space for the Deque and StringBuilder.</td><td>O(n)</td><td>O(n)</td></tr>
</tbody>
</table>
</div><h3 id="heading-notes">Notes:</h3>
<ul>
<li><p>In the time complexity, 'n' is the length of the input string.</p>
</li>
<li><p>All of the mentioned approaches have the same time complexity <strong>(O(n))</strong> because they all traverse the string once.</p>
</li>
<li><p>All the approaches use additional data structures (array, <code>StringBuilder</code>, stack, stream, or deque) and thus have a linear space complexity <strong>(O(n))</strong>.</p>
</li>
<li><p><a target="_blank" href="https://github.com/rakesh-vardan/daily-practice/blob/master/src/test/java/com/me/coding/problems/leetcode/string/ReverseWordsInString.java">GitHub</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Azure AI-900 Certification: My Personal Experience]]></title><description><![CDATA[I am happy to share that I passed the AI-900 exam recently with a score of 928/1000. Here are some inputs from my experience, that might be helpful to anyone planning for the same.
The below materials helped with my exam preparation:

Microsoft Offic...]]></description><link>https://blog.rakeshvardan.com/azure-ai-900-certification-my-personal-experience</link><guid isPermaLink="true">https://blog.rakeshvardan.com/azure-ai-900-certification-my-personal-experience</guid><category><![CDATA[ai900]]></category><category><![CDATA[azure certified]]></category><category><![CDATA[AI]]></category><category><![CDATA[generative ai]]></category><dc:creator><![CDATA[Rakesh Vardan]]></dc:creator><pubDate>Wed, 27 Mar 2024 11:42:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1711538892074/20f07dd4-7ff1-4aa7-ad97-777e788e3ac0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I am happy to share that I passed the <a target="_blank" href="https://learn.microsoft.com/en-us/credentials/certifications/azure-ai-fundamentals/?practice-assessment-type=certification"><strong>AI-900</strong></a> exam recently with a score of 928/1000. Here are some inputs from my experience, that might be helpful to anyone planning for the same.</p>
<h3 id="heading-the-below-materials-helped-with-my-exam-preparation"><strong>The below materials helped with my exam preparation:</strong></h3>
<ol>
<li><p>Microsoft Official <a target="_blank" href="https://learn.microsoft.com/en-us/credentials/certifications/exams/ai-900/">Documentation</a> - good material with all basics &amp; hands-on labs on Azure. <em>A must-read for everyone.</em></p>
</li>
<li><p>AI-900 full course by ExamPro - available on <a target="_blank" href="https://www.youtube.com/watch?v=OwZHNH8EfSU">YouTube</a> - a 4-hour video. Some of the concepts are outdated due to service name changes in Azure, but it gives a detailed overview in a short time.</p>
</li>
<li><p>Cheat-sheet from <a target="_blank" href="https://www.whizlabs.com/blog/wp-content/uploads/2020/12/AI-900-whizcards.pdf">Whizcards</a> for a quick refresher before the exam - some of the concepts are outdated, due to service name changes in Azure.</p>
</li>
</ol>
<h3 id="heading-practice-tests"><strong>Practice tests:</strong></h3>
<ol>
<li><p>Official practice <a target="_blank" href="https://learn.microsoft.com/en-us/credentials/certifications/exams/ai-900/practice/assessment?assessmentId=26&amp;assessment-type=practice">tests</a> from MS ~200+ questions (tried multiple times, with some repeated questions though)</p>
</li>
<li><p>Practice <a target="_blank" href="https://www.examtopics.com/exams/microsoft/ai-900/view/1/">questions</a> from ExamTopics  ~120 questions with a free account - some of these appeared in the actual exam!</p>
</li>
</ol>
<h3 id="heading-exam-experience"><strong>Exam experience:</strong></h3>
<ol>
<li><p>I chose to attempt from a test center near where I live.</p>
</li>
<li><p>There are a total of <strong>42</strong> questions that need to be answered in <strong>45</strong> minutes. We need to manage time effectively. Got different question types - selecting a single answer, multiple answers, drag-drop, etc.</p>
</li>
<li><p>The first 5-6 questions were very tricky, with lengthy descriptions &amp; images, which can eat into our time. There were, however, some very simple questions toward the end of the exam!</p>
</li>
<li><p>Minimum 3-4 questions from Responsible AI in Azure - very important topic.</p>
</li>
<li><p>Some questions around - OpenAI/AzureOpenAI/ChatGPT/Github Co-pilot/LLM models from OpenAI.</p>
</li>
<li><p>As usual, we can mark any question for review and check later - the best option is to cover all the questions at least once in the given time frame.</p>
</li>
<li><p>Given my preparation, I felt the test was of medium complexity.</p>
</li>
</ol>
<p><em>Here is my</em> <a target="_blank" href="https://learn.microsoft.com/en-in/users/rakeshbudugu-7267/credentials/ed48afcf032f5521?ref=https%3A%2F%2Fwww.linkedin.com%2F"><em>certificate</em></a></p>
<p>All the best with your preparation!</p>
]]></content:encoded></item><item><title><![CDATA[How to Use Chrome's Copy as CURL for Postman API Calls]]></title><description><![CDATA[During the development process, many times we need to replicate the backend API call for debugging purposes. This is also required to write the automation & performance tests with proper API configuration details to successfully get the response as e...]]></description><link>https://blog.rakeshvardan.com/how-to-use-chromes-copy-as-curl-for-postman-api-calls</link><guid isPermaLink="true">https://blog.rakeshvardan.com/how-to-use-chromes-copy-as-curl-for-postman-api-calls</guid><category><![CDATA[Postman]]></category><category><![CDATA[curl]]></category><category><![CDATA[REST API]]></category><dc:creator><![CDATA[Rakesh Vardan]]></dc:creator><pubDate>Tue, 26 Sep 2023 10:53:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/NScCnMEYHQ0/upload/733b63ad0eb2c9cc16324e25cce8fd22.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>During the development process, many times we need to replicate the backend API call for debugging purposes. This is also required to write the automation &amp; performance tests with proper API configuration details to successfully get the response as expected. Manually recreating the request with the correct URI, headers, and cookies can be tedious, and time-consuming too. Fortunately, most modern browsers come with a simple utility bundled along to solve this issue.</p>
<p>Let's take an example API call for this. We will be using the below free fake API site to demonstrate this.</p>
<p><a target="_blank" href="https://jsonplaceholder.typicode.com/">JSON Placeholder Fake API</a></p>
<ul>
<li><p>Open the above link in the <strong>Chrome</strong> browser</p>
</li>
<li><p>Open developer tools either via navigating to <strong>Chrome Options &gt; More Tools &gt; Developer Tools</strong> OR using the shortcut <strong>CTRL + SHIFT + I / CMD + OPTION + I (Mac)</strong></p>
</li>
<li><p>Click on the <em>Run Script</em> button to invoke the below sample API and see the response in the <em>developer tools</em> under the <em>Network</em> tab</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695721087573/1740e9fd-e4fe-4bf8-92b7-577e3048c123.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Take a look at the request details like - Request URL, headers, cookies, etc.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695721308408/c61cf704-6780-47ad-a884-b2ce222c0afd.png" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695721366533/9fc8b19a-d5a6-431a-bdd4-4740835131c3.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Select the request and right-click to see the options menu, then choose <strong>Copy &gt; Copy as cURL</strong></li>
</ul>
<blockquote>
<p><a target="_blank" href="https://curl.se/">Curl</a> (short for "Client URL") is a command-line tool that enables data transfer over various network protocols.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695721663892/1c262d62-19c3-45fc-a752-db387e249179.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>If Chrome is opened on Windows, there will be two options for cURL: one for Bash and another for CMD.</p>
</blockquote>
<ul>
<li>Now the entire request details are added to the clipboard, something like this</li>
</ul>
<pre><code class="lang-bash">curl <span class="hljs-string">'https://jsonplaceholder.typicode.com/todos/1'</span> \
  -H <span class="hljs-string">'authority: jsonplaceholder.typicode.com'</span> \
  -H <span class="hljs-string">'accept: */*'</span> \
  -H <span class="hljs-string">'accept-language: en-US,en;q=0.9,te;q=0.8'</span> \
  -H <span class="hljs-string">'cookie: _ga=GA1.1.63507224.1695719458; ajs_anonymous_id=c601a09a-3cc6-4b84-a482-f58a353eea86; _ga_E3C3GCQVBN=GS1.1.1695719458.1.1.1695720002.0.0.0'</span> \
  -H <span class="hljs-string">'if-none-match: W/"53-hfEnumeNh6YirfjyjaujcOPPT+s"'</span> \
  -H <span class="hljs-string">'referer: https://jsonplaceholder.typicode.com/'</span> \
  -H <span class="hljs-string">'sec-ch-ua: "Chromium";v="116", "Not)A;Brand";v="24", "Google Chrome";v="116"'</span> \
  -H <span class="hljs-string">'sec-ch-ua-mobile: ?0'</span> \
  -H <span class="hljs-string">'sec-ch-ua-platform: "macOS"'</span> \
  -H <span class="hljs-string">'sec-fetch-dest: empty'</span> \
  -H <span class="hljs-string">'sec-fetch-mode: cors'</span> \
  -H <span class="hljs-string">'sec-fetch-site: same-origin'</span> \
  -H <span class="hljs-string">'user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36'</span> \
  --compressed
</code></pre>
<h2 id="heading-use-in-terminal">Use in terminal</h2>
<p>We can simply paste the copied content into the terminal and run the request. As we can see, the correct response is shown.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695722127108/d6f08b45-5c15-4aef-afc3-53b6ca0e6ff2.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-use-in-postman">Use in Postman</h2>
<p><a target="_blank" href="https://www.postman.com/">Postman</a> has an import option to load the API requests. Let's use the below option to import the request and check the response.</p>
<ul>
<li>Open Postman, navigate to <strong>File &gt; Import &gt; Raw text</strong> option paste the text that we copied in the previous step, and click <strong>Continue</strong> and then <strong>Import</strong></li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695722413354/1264fb6f-6a04-4bdb-914d-23c848926bfb.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Once the request is sent to the server, we see the same response.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695722797615/05b03cf6-eeac-4410-a991-739daa586fff.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>We may ignore the response codes for this example: the server responds with 304 (Not Modified) in the browser because of the cached <code>if-none-match</code> header, whereas it returns 200 in Postman for the same call.</p>
</blockquote>
<p>Using the copied cURL command in <em>Postman</em>, we can effortlessly edit it according to our needs and begin utilizing the APIs immediately. Other browsers - Firefox, Edge, Safari, etc. also have similar features to get this info from the developer tools making developers' lives easy.</p>
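<p>For automation tests, the same request can also be reproduced in plain Java. The sketch below uses <code>java.net.http</code> (JDK 11+); the class name is illustrative, and it assumes, as a simplification, that the browser-specific <code>sec-*</code> and cookie headers from the copied cURL command can be dropped for this public API:</p>

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TodoApiCall {

    // Rebuild the copied cURL request, keeping only the headers that matter
    // for this public endpoint.
    static HttpRequest buildRequest() {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://jsonplaceholder.typicode.com/todos/1"))
                .header("accept", "*/*")
                .GET()
                .build();
    }

    public static void main(String[] args) {
        try {
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(buildRequest(), HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
            System.out.println(response.body());
        } catch (Exception e) {
            System.out.println("Request failed (no network?): " + e.getMessage());
        }
    }
}
```

<p>When the request succeeds, the class prints the status code and the JSON body, matching what we saw in the terminal and in Postman.</p>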
<p>Thank you for reading!</p>
]]></content:encoded></item><item><title><![CDATA[Building A CI/CD Pipeline With Travis CI, Docker, And LambdaTest]]></title><description><![CDATA[This article was originally published on LambdaTest's official blog.
With the help of well-designed Continuous Integration systems in place, teams can build quality software by developing and verifying it in smaller increments. Continuous Integration...]]></description><link>https://blog.rakeshvardan.com/building-a-cicd-pipeline-with-travis-ci-docker-and-lambdatest</link><guid isPermaLink="true">https://blog.rakeshvardan.com/building-a-cicd-pipeline-with-travis-ci-docker-and-lambdatest</guid><category><![CDATA[Docker]]></category><category><![CDATA[selenium]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[TravisCI]]></category><category><![CDATA[LambdaTest]]></category><dc:creator><![CDATA[Rakesh Vardan]]></dc:creator><pubDate>Wed, 06 Sep 2023 03:44:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/9AxFJaNySB8/upload/65645617e8251e2e4c8b4df8d7261a35.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This article was originally published on LambdaTest's official</em> <a target="_blank" href="https://www.lambdatest.com/blog/ci-cd-pipeline-with-travis-ci-docker-and-lambdatest/"><strong><em>blog</em></strong></a><em>.</em></p>
<p>With the help of well-designed Continuous Integration systems in place, teams can build quality software by developing and verifying it in smaller increments. <a target="_blank" href="https://www.lambdatest.com/blog/what-is-continuous-integration-and-continuous-delivery/">Continuous Integration (CI)</a> is the process of pushing small sets of code changes frequently to the common integration branch rather than merging all the changes at once. This avoids big-bang integration before a product release. <a target="_blank" href="https://www.lambdatest.com/selenium-automation">Test automation</a> and Continuous Integration are an integral part of the software development life cycle. As the benefits of following the DevOps methodology become significant, teams have started using tools and libraries like Travis CI with Docker to accomplish this.</p>
<p>In this blog, we will discuss the role of Travis CI in Selenium test automation and dive deep into using Travis CI with Docker. We will also look at some effective Travis CI and Docker examples, and we will integrate and <a target="_blank" href="https://www.lambdatest.com/automate-selenium-tests-with-travisci">automate Selenium test suites with Travis CI</a> on the LambdaTest cloud grid.</p>
<p>Without further ado, Let’s build a CI/CD pipeline with Travis CI, Docker, and LambdaTest.</p>
<h2 id="heading-overview-of-travis-ci"><strong>Overview Of Travis CI</strong></h2>
<p><a target="_blank" href="https://travis-ci.org/">Travis CI</a> is a cloud-based service available for teams for building and testing applications. As a continuous integration platform, Travis CI supports the development process by automatically building and testing the code changes, providing immediate feedback if the change is successful. It can also help us automate other steps in the software development cycle, such as managing the deployments and notifications.</p>
<p>Travis CI can build and test projects on the cloud platform hosted on code repositories like GitHub. We could also use other <a target="_blank" href="https://www.lambdatest.com/blog/31-best-ci-cd-tools/">best CI/CD tools</a> such as Bitbucket, GitLab, and Assembla, but some of them are still in the beta phase. When we run a build, Travis CI clones the repository into a brand-new virtual environment and performs a series of steps to build and test the code.</p>
<p>If any of those build and test jobs fails, the build is considered broken; otherwise, it is successful. On success, Travis CI can deploy the code to a web server or an application host.</p>
<p>In case you are eager to learn about the Travis CI/CD pipeline, please refer to our detailed blog that deep dives into how to build your first <a target="_blank" href="https://www.lambdatest.com/blog/build-your-first-ci-cd-pipeline-with-travis-ci/">CI/CD pipeline with Travis CI</a>.</p>
<h3 id="heading-travisyml-configuration"><strong>.travis.yml Configuration</strong></h3>
<p>Builds on Travis CI are configured mostly through the build configuration defined in the file .travis.yml in the code repository. It allows the configuration to be version-controlled and flexible. Once the application code is completed, we need to add the .travis.yml file to the repository.</p>
<p>It contains the instructions on what to build and how exactly to test the application. Travis CI performs all the steps configured in this YAML file.</p>
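<p>As an illustrative sketch only (not the exact configuration from any project in this post), a minimal <code>.travis.yml</code> for a Java/Maven test suite might look like this:</p>

```yaml
# Minimal illustrative .travis.yml for a Java/Maven project
language: java
jdk:
  - openjdk11

# install phase: resolve dependencies without running tests
install:
  - mvn install -DskipTests=true -B

# script phase: run the test suite
script:
  - mvn test -B

# safelist: only build commits on these branches
branches:
  only:
    - master
```

<p>Each top-level key maps to one of the phases described below; Travis CI validates this file and executes the phases in order for every job.</p>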
<p>Some of the commonly used terms associated with Travis CI are:</p>
<ul>
<li><p><strong>Job</strong> – Job is an automated process that clones the code repository into a brand-new virtual environment and performs a set of actions such as compiling the code, running tests, deploying the artifact, etc.</p>
</li>
<li><p><strong>Phase</strong> – A phase is a sequential step within a job; phases such as <code>before_install</code>, <code>install</code>, and <code>script</code> collectively make up a job in Travis CI.</p>
</li>
<li><p><strong>Build</strong> – Build refers to a group of jobs that are running in a sequence. For example, a build can have two jobs defined in it. Each job tests the project with a different version of the programming language. The build is finished only when all of its jobs have completed execution.</p>
</li>
<li><p><strong>Stage</strong> – Stage refers to the group of jobs that run in parallel.</p>
<h3 id="heading-features-of-travis-ci"><strong>Features Of Travis CI</strong></h3>
<p>  Some of the salient features that Travis CI provides as a CI/CD platform are given below:</p>
<p>  1. Free cloud-based hosting</p>
<p>  2. Automatic integration with GitHub</p>
<p>  3. Safelisting or Blocklisting branches</p>
<p>  4. Pre-installed tools and libraries for build and test</p>
<p>  5. Provision to add build configuration via a shell script</p>
<p>  6. Caching the dependencies</p>
<p>  7. Building Pull-Requests</p>
<p>  8. Support for multiple programming languages.</p>
<p>  9. Build Configuration Validation.</p>
<p>  10. Easily set up CRON jobs.</p>
<p>  In case you want to deep dive into how Jenkins (the preferred open-source CI/CD tool) stacks up against Travis CI, please refer to our blog <a target="_blank" href="https://www.lambdatest.com/blog/travis-ci-vs-jenkins/">Travis CI Vs. Jenkins</a> to make an informed decision.</p>
<h2 id="heading-overview-of-docker"><strong>Overview Of Docker</strong></h2>
<p>  According to the Stack Overflow 2021 Developer survey, Docker is one of the most used containerization platforms to develop, ship, and run applications.</p>
<p>  <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-36.png" alt="unnamed (36)" /></p>
  <center><em><a href="https://insights.stackoverflow.com/survey/2021#most-loved-dreaded-and-wanted-tools-tech-want" target="_blank">Source</a></em></center>

<p>  Docker enables us to separate applications from the infrastructure so that software can be delivered quickly. In short, a Virtual Machine (VM) virtualizes the hardware, whereas Docker virtualizes the Operating System (OS).</p>
<p>  In a virtual machine setup, multiple operating systems run on a single host machine using a virtualization layer known as a hypervisor. Since a full OS needs to be loaded, there can be a delay while the machines boot up, along with the overhead of installing and maintaining each operating system. In Docker, by contrast, the host operating system itself is leveraged for virtualization.</p>
<p>  The Docker daemon, installed on the host machine, handles the heavy lifting: listening for API requests, managing Docker objects (images, containers, and volumes), and so on. In case you are looking to leverage Docker with Selenium, do check out our detailed blog that outlines how to run <a target="_blank" href="https://www.lambdatest.com/blog/run-selenium-tests-in-docker/">Selenium tests in Docker</a>.</p>
</li>
<li><p>Some of the useful concepts Docker leverages from the Linux world are as follows:</p>
</li>
<li><p><strong>Namespaces</strong> – Docker uses a Linux technology called namespaces to provide the isolated workspace known as a container. When a container is launched, Docker creates a set of namespaces for it. Some namespaces on Linux are pid, net, ipc, mnt, and uts.</p>
</li>
<li><p><strong>Control Groups</strong> – Docker Engine on Linux also uses another technology known as control groups (cgroups). A cgroup restricts an application to a limited set of resources.</p>
</li>
<li><p><strong>Union file systems</strong> – Union file systems, or UnionFS, are lightweight and fast file systems that work by creating layers. Docker Engine makes use of UnionFS to provide container building blocks.</p>
</li>
<li><p><strong>Container Format</strong> – Docker Engine combines the namespaces, control groups, and UnionFS into a container format wrapper. The default container format is libcontainer.</p>
<h3 id="heading-basics-of-docker"><strong>Basics Of Docker</strong></h3>
<p>  Docker uses a client-server architecture at its core. Some of the common objects and terminology used in Docker are explained below.</p>
<ul>
<li><p><strong>Image</strong> – An image is a read-only template with all the required instructions to create a Docker container. An image is a collection of files and metadata; these files collectively form the root filesystem of the container. Typically, images are made up of layers stacked on top of each other, and images can share layers to optimize disk usage, memory usage, and transfer times.</p>
</li>
<li><p><strong>Container</strong> – A container is a runnable instance of an image. Containers have their own lifecycle: they can be created, started, stopped, deleted, and moved using Docker CLI commands or the API. By default, containers are isolated from other containers and from the host machine. A container is defined by the image it is created from, together with any configuration or environment variables supplied when starting it.</p>
</li>
<li><p><strong>Engine</strong> – Docker Engine is a client-server application that includes the following major components:</p>
<ul>
<li><p>A server, which is a long-running program called a daemon process (the <code>dockerd</code> command).</p>
</li>
<li><p>A REST API defines interfaces that applications can use to communicate with the daemon and instruct it on what needs to be done.</p>
</li>
<li><p>A command-line interface (CLI) client (the <code>docker</code> command).</p>
</li>
</ul>
</li>
<li><p><strong>Registry</strong> – A registry is a place where Docker images are stored and distributed. Docker Hub is the default public registry and hosts the openly available Docker images.</p>
</li>
<li><p><strong>Network</strong> – Docker&rsquo;s networking subsystem is pluggable and uses drivers on the host OS; these drivers provide the core networking functionality. The built-in drivers include:</p>
<ul>
<li><p>Bridge</p>
</li>
<li><p>Host</p>
</li>
<li><p>Overlay</p>
</li>
<li><p>Macvlan</p>
</li>
<li><p>none</p>
</li>
</ul>
</li>
<li><p><strong>Volume</strong> – Since Docker containers are ephemeral by default, we need a way to persist data beyond a container&rsquo;s lifetime and share it between containers. Docker volumes provide this functionality: data written to a volume is persisted across containers.</p>
</li>
<li><p><strong>Dockerfile</strong> – A Dockerfile is a plain-text file containing the instructions used to build an image, including the default command to run when a container starts.</p>
</li>
<li><p><strong>Docker-compose</strong> – Docker Compose is a utility for defining and running multi-container Docker applications. A YAML file configures the services required by the application, and with a single command we can start all the services and establish the dependencies between them.</p>
</li>
</ul>
</li>
</ul>
<h2 id="heading-how-to-integrate-travis-ci-with-selenium-using-docker"><strong>How to Integrate Travis CI with Selenium using Docker</strong></h2>
<h3 id="heading-using-docker-images"><strong>Using Docker Images</strong></h3>
<p>    Now that we have understood the fundamentals of Travis CI and Docker, let us start using them together to run some UI tests with Selenium WebDriver.</p>
<p>    The first step is to define a test scenario for our Travis CI and Docker Example. Here is the Selenium test scenario:</p>
<p>    1. Open a specific browser – Chrome, Firefox, or Edge</p>
<p>    2. Navigate to the <a target="_blank" href="https://lambdatest.github.io/sample-todo-app/">sample application</a></p>
<p>    3. Verify headers on the page</p>
<p>    4. Select the first two checkboxes and see if they are selected</p>
<p>    5. Clear the textbox ("Want to add more"), enter some text into it, and click on Add</p>
<p>    6. Check if the new item is added to the list and verify its text</p>
<p>    The test code is written with Selenium WebDriver in a browser-agnostic way: a BrowserFactory instantiates the browser based on the input and passes it on to our tests.</p>
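<p>    A minimal, dependency-free sketch of such a factory (the names <code>BrowserFactory</code> and <code>getCapabilities</code> are illustrative, not the repository&rsquo;s actual code):</p>

```java
import java.util.Map;

// Simplified sketch of the browser-agnostic factory idea: map the browser
// name passed in from the command line to the capabilities sent to the
// remote Selenium server. In the real project this map would be wrapped in
// DesiredCapabilities and handed to a RemoteWebDriver.
public class BrowserFactory {
    public static Map<String, String> getCapabilities(String browser) {
        switch (browser.toLowerCase()) {
            case "chrome":
                return Map.of("browserName", "chrome");
            case "firefox":
                return Map.of("browserName", "firefox");
            case "edge":
                return Map.of("browserName", "MicrosoftEdge");
            default:
                throw new IllegalArgumentException("Unsupported browser: " + browser);
        }
    }

    public static void main(String[] args) {
        System.out.println(getCapabilities("firefox"));
    }
}
```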
<p>    We are using TestNG and adding appropriate annotations for our tests. We also configured the testng.xml file to include what tests to run.</p>
<p>    And finally, we use a Maven command to trigger the tests. In case you are relatively new to Maven, do check out our detailed blog that will help you get started with <a target="_blank" href="https://www.lambdatest.com/blog/getting-started-with-maven-for-selenium-testing/">Maven for Selenium Automation</a>.</p>
<p>    The entire code for this project is available on the <a target="_blank" href="https://github.com/rakesh-vardan/travisci-selenium-docker-lambdatest">GitHub repository</a>.</p>
<p>    Since our goal is to configure and run the Selenium tests using Travis CI and Docker, you should check the following prerequisites:</p>
<p>    1. An active GitHub account, with the code repository available on GitHub.</p>
<p>    2. An active Travis CI account and the required permissions to access the repository from GitHub.</p>
<p>    Once all these prerequisites are met, let’s take a quick look at the travis.yml file configured in the project.</p>
<p>    As we learned earlier, travis.yml is the configuration file that we use to instruct the Travis CI cloud server on what actions to perform for the given project.</p>
<p>    Let’s understand all these details one by one.</p>
<p>    <strong>dist</strong>: When a new job is started in Travis CI, it spins up a new virtual environment in the cloud infrastructure. Each build runs in one of the pre-configured virtualization environments, and the dist key specifies which distribution the server should spin up. Some of the available values for dist on Linux are trusty, precise, bionic, etc.</p>
<p>    <strong>language</strong>: The language key specifies the language support needed during the build process; we can choose an appropriate value from the list of supported languages. Some examples include Java, Ruby, and Go.</p>
<p>    <strong>jdk</strong>: The jdk key is used when the language key is set to Java; it gives the version of the JDK to use while building the project. In our case it is oraclejdk8, as we are using Java 1.8.</p>
<p>    <strong>script</strong>: The script key runs the actual build command or script specific to the selected language or environment. In our example we use mvn commands, since the project is Maven-based.</p>
<p>    <strong>before_script</strong>: The before_script key runs commands before the actual script commands; here we use it to make sure the Docker environment is set up before the Maven command runs in the script phase.</p>
<p>    <strong>cache</strong>: The cache key caches content that doesn’t change frequently, which can help speed up the build process. It supports several caching strategies through sub-keys such as directories and npm.</p>
<p>    <strong>directories</strong>: The directories strategy caches the directories at the given paths.</p>
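<p>    Putting the keys above together, the file looks roughly like this (a hedged reconstruction; the exact values live in the repository&rsquo;s travis.yml):</p>

```yaml
dist: trusty
language: java
jdk: oraclejdk8
before_script:
  # bring up the browser container before the tests run
  - docker run -d -p 4444:4444 -v /dev/shm:/dev/shm selenium/standalone-firefox:4.0.0-rc-1-prerelease-20210618
script:
  - mvn clean install
cache:
  directories:
    - $HOME/.m2
```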
<p>    We can also use the build config explorer from Travis CI to check how the Travis CI system reads and interprets the configuration. This is helpful for validating the configuration, adding the correct keys, and keeping up with the latest specifications.</p>
<p>    Once the prerequisites mentioned above are met, we can start our build in Travis CI with Docker. There are various ways to trigger builds:</p>
<p>    1. Commit the code and push to GitHub; the default webhook will trigger a new build on the server.</p>
<p>    2. Manually trigger a build from the dashboard views.</p>
<p>    3. Trigger with the <a target="_blank" href="https://docs.travis-ci.com/user/triggering-builds/">Travis CI API</a>.</p>
<p>    Here is the default dashboard view of my user in Travis CI. As we can see, I have activated only two projects from GitHub.</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-35.png" alt="unnamed (35)" /></p>
<p>    From any of the options described above, Travis CI will start performing the below actions when a build is triggered:</p>
<p>    1. Reads the configuration defined in travis.yml</p>
<p>    2. Adds the build to the job queue</p>
<p>    3. Starts creating a brand-new virtual environment to execute the steps mentioned in the travis.yml file</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-34.png" alt="unnamed (34)" /></p>
<p>    In the View config tab, we can also see the build configuration as read by Travis CI. Since we haven’t specified any os value in the travis.yml file, it takes the default value as Linux and proceeds with the build set-up.</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-33.png" alt="unnamed (33)" /></p>
<p>    Once the build is completed, we can see the banner image indicating that the build has passed successfully. It also gives other information such as the branch, the execution time, the latest commit message, the operating system, and the build environment.</p>
<p>    Let’s navigate to the Job log section and understand what Travis CI and Docker have performed for our build.</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-32.png" alt="unnamed (32)" /></p>
<p>    In the Job log section, we can see the complete log information for the current job. As seen below, Travis CI has spun up the below worker for our build. It has created a new instance in the Google Cloud Platform, and the start-up time is around 6.33 seconds.</p>
<p>    Travis CI will dynamically decide which platform (like GCP Compute Engine or AWS EC2 machines) to choose for the instance. If an enterprise version is used, we could also configure our infrastructure to be used by Travis CI.</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-31.png" alt="unnamed (31)" /></p>
<p>    From the Build system information in the logs, we can identify the build language used, the OS distribution, the kernel version, etc. For each operating system, Travis CI has a set of default tools defined, and it installs and configures all the services required for the build. In this case, Git is installed.</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-30.png" alt="unnamed (30)" /></p>
<p>    Also, the Docker client and server are installed by default. We can verify the versions of these components in the below screenshot:</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-29.png" alt="unnamed (29)" /></p>
<p>    Since our build config uses Java JDK, Travis CI will configure the JDK as specified in the travis.yml file and update the JAVA_HOME path variable as appropriate.</p>
<p>    Next, it clones the latest code from the specified branch (master in our example) and updates the current directory. It also reads any environment variables defined and updates the setup as needed. Finally, it checks whether a build cache is defined for the project; we discuss the build cache in a later section.</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-28.png" alt="unnamed (28)" /></p>
<p>    Before pointing our tests at a remote server, we need a browser configured on a specific port and listening for incoming requests. In our example, we configure a RemoteWebDriver that targets that port to launch our tests.</p>
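<p>    Sketched in plain Java (only the URL handling; the <code>RemoteWebDriver</code> call is shown as a comment since it needs the Selenium dependency, and the class name is illustrative):</p>

```java
public class GridEndpoint {
    // The browser container started in the before_script phase listens on
    // port 4444 of the Travis CI host, so the tests target this hub URL.
    public static String gridUrl() {
        return "http://localhost:4444/wd/hub";
    }

    public static void main(String[] args) {
        // In the real tests this URL is passed to the remote driver, e.g.
        // WebDriver driver = new RemoteWebDriver(new URL(gridUrl()), capabilities);
        System.out.println(gridUrl());
    }
}
```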
<p>    If you recall the travis.yml file in our project, we defined a before_script phase containing two commands.</p>
<p>    Let’s understand the commands that we have used.</p>
<p>    The first one is the Docker run command that starts the Selenium Firefox container on port 4444.</p>
<p>    <strong>docker</strong>: Docker is the base command for its CLI</p>
<p>    <strong>run</strong>: Runs the commands in a new container</p>
<p>    <strong>-d</strong>: Command option for Docker run to run the container in the background</p>
<p>    <strong>-p</strong>: Command option for Docker run to publish a container’s port to the host port. Here we are mapping the 4444 port of the container to the 4444 port in the host. The format is host port: container port</p>
<p>    <strong>-v</strong>: Command option for Docker run to bind mount a volume. Here we map the host’s shared memory /dev/shm into the container’s /dev/shm so the browser does not run out of shared memory.</p>
<p>    Next is the image that we use in the docker run command. Here we are using the Selenium Firefox standalone image with tag 4.0.0-rc-1-prerelease-20210618. More information about the latest available versions can be found in the official <a target="_blank" href="https://github.com/SeleniumHQ/docker-selenium">Selenium GitHub repository</a> and the <a target="_blank" href="https://hub.docker.com/u/selenium">Docker Hub public registry</a>.</p>
<p>    With the first command, Travis CI pulls the specified Selenium Firefox image from the Docker registry and creates a new container with the provided configuration. Once the container has been created successfully, we can list the running containers with the second command:</p>
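<p>    Reconstructed from the options described above, the two commands are (a hedged sketch; the image tag is the one named in this post):</p>

```bash
# start the standalone Firefox container in the background on port 4444
docker run -d -p 4444:4444 -v /dev/shm:/dev/shm selenium/standalone-firefox:4.0.0-rc-1-prerelease-20210618

# list the running containers
docker ps
```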
<p>    As we observe from the job logs, the docker run command executed successfully, and the running container is listed with its container ID, the image name used, the ports mapped between container and host, the volumes mounted, and the container’s status.</p>
<p>    With this, the required setup for running our tests with Travis CI and Docker is complete. The next step is to start sending HTTP requests to this container on the specified port, which we do using a Maven command.</p>
<p>    Here we use the Maven build command along with <a target="_blank" href="https://www.lambdatest.com/blog/create-testng-xml-file-execute-parallel-testing/">TestNG</a>, running the clean and install phases with command-line arguments such as the suite file, the browser to use, and the grid URL.</p>
<p>    Since the Firefox container is running in the Travis CI instance on port 4444, we send the test requests to this port on the mentioned URL.</p>
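<p>    A hypothetical form of that command (the property names <code>-DsuiteXmlFile</code>, <code>-Dbrowser</code>, and <code>-DgridHubURL</code> follow the description above and may differ in the repository):</p>

```bash
mvn clean install -DsuiteXmlFile=testng.xml -Dbrowser=firefox -DgridHubURL=http://localhost:4444/wd/hub
```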
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-27.png" alt="unnamed (27)" /></p>
<p>    Once the Maven build is started, Travis CI downloads the required dependencies from the Maven central repository for the first time. Since we are using the cache configuration for Maven, subsequent runs use the cache.</p>
<p>    Therefore, Travis CI can cache content for the build so that the build process is sped up. In order to use this caching feature, we need to set Build pushed branches to ON in the repository settings.</p>
<p>    Once all the tests have executed on the browser available in the Docker container, we get the results, and the build exits with status zero (0).</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-26.png" alt="unnamed (26)" /></p>
<p>    We have successfully performed our first Selenium test automation run with Travis CI and Docker. A build that completes with exit code zero (0) is treated as passed. In Selenium test automation we usually need to see the test results to assess the quality of the regression suite, so we added a deploy phase in the travis.yml file to publish the artifacts back to GitHub once the build completes.</p>
<p>    Since we have used TestNG in our Travis CI and Docker example, we get test execution reports in a clean and precise format in the target folder.</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-25.png" alt="unnamed (25)" /></p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-24.png" alt="unnamed (24)" /></p>
<p>    We are trying to deploy the index.html and emailable-report.html as per the below details.</p>
<p>    Deploy phase details from the Travis CI build logs.</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-23.png" alt="unnamed (23)" /></p>
<p>    In order to publish the artifacts to GitHub, we need to create a personal access token from the <a target="_blank" href="https://github.com/settings/tokens">GitHub developer settings page</a> and add it to the environment variables section of the Travis CI repository. The same variable api_key is used in the deploy phase of our configuration.</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-22.png" alt="unnamed (22)" /></p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-21.png" alt="unnamed (21)" /></p>
<p>    That’s all! We are done with running one complete build in Travis CI with Docker. In the Build History section, we can see the whole history of test runs as below.</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-20.png" alt="unnamed (20)" /></p>
<p>    We can see the deployed artifacts in the Releases section on GitHub as below.</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-19.png" alt="unnamed (19)" /></p>
<p>    We can also run the tests in other browsers using the corresponding browser containers. But, first, we need to update the before_install phase with the relevant run command in the travis.yml file.</p>
<p>    We also need to update the -DgridHubURL value in the Maven command with the host port configured for each browser.</p>
<h3 id="heading-using-docker-compose"><strong>Using Docker Compose</strong></h3>
<p>    In our previous Travis CI and Docker example, we used standalone Docker images from Selenium to run our tests in Travis CI with Docker. However, as you may have observed, adding each docker run command to the travis.yml file is not fool-proof: the file has to be updated for every change required on the browser side.</p>
<p>    Instead, we could leverage the docker-compose utility from Docker itself. We can define all the required browser-specific services in a docker-compose.yml file and refer to it from the travis.yml file. This improves the readability of the build configuration file and keeps the browser containers separate.</p>
<p>    With docker-compose, a single command brings up all the services in one go, and a single command stops them again. Compose files are also easy to write, since they use the same YAML syntax.</p>
<p>    We use the below docker-compose.yml file to define the services.</p>
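<p>    A sketch of what such a compose file can look like, using the hub-and-nodes images from the same Selenium release as the earlier standalone example (an assumption; the repository&rsquo;s actual file may differ):</p>

```yaml
version: "3"
services:
  selenium-hub:
    image: selenium/hub:4.0.0-rc-1-prerelease-20210618
    ports:
      - "4444:4444"
  chrome:
    image: selenium/node-chrome:4.0.0-rc-1-prerelease-20210618
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - selenium-hub
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
  firefox:
    image: selenium/node-firefox:4.0.0-rc-1-prerelease-20210618
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - selenium-hub
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
```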
<p>    So now the new travis.yml configuration looks as below. Update the travis.yml file in the project with this file, and commit and push the code to start the new build in Travis CI and Docker using the new build configuration.</p>
<p>    The only change is in the before_script phase: instead of running direct docker commands, we use the curl utility to download the compose file, save it as docker-compose.yml, and drive everything through docker-compose.</p>
<p>    Once that is completed, we use the docker-compose up -d statement to start all the services. This command starts all the services in the compose file and starts listening to the requests on the mentioned ports. Then we use the docker-compose ps command to see the services that are started. Finally, we use the same maven command to point our tests and execute them using the Travis CI build job.</p>
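<p>    The before_script phase can then be sketched as follows (a hypothetical reconstruction; the raw URL of the compose file is left as a placeholder):</p>

```yaml
before_script:
  # download the compose file from the repository and save it locally
  - curl -o docker-compose.yml <raw-url-of-docker-compose.yml>
  # start all services in the background, then list them
  - docker-compose up -d
  - docker-compose ps
```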
<p>    The below is the screenshot of the Travis CI logs, which shows that the docker-compose has successfully started the defined services.</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-18.png" alt="unnamed (18)" /></p>
<p>    Build log details while running the tests on the Chrome browser with the below maven command.</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-17.png" alt="unnamed (17)" /></p>
<p>    Build log details while running the tests on the Firefox browser with the below maven command.</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-16.png" alt="unnamed (16)" /></p>
<p>    As you may have observed, we use the same port for running the tests in both the Chrome and Firefox browsers. This is because we are leveraging the hub-node architecture of Selenium Grid here; in our earlier Travis CI and Docker example with the standalone container, no hub container was used.</p>
<h2 id="heading-run-selenium-tests-with-zalenium-and-travis-ci"><strong>Run Selenium Tests With Zalenium And Travis CI</strong></h2>
<p>    From our previous examples of running tests with Selenium, Travis CI, and Docker images, it is clear that we always need to update the image versions whenever a new version is released. This becomes an overhead, as a lot of time goes into updating the configurations.</p>
<p>    If you search for an alternative, you might come across a solution called Zalenium. Zalenium is a flexible and scalable container-based Selenium Grid with video recording, live preview, basic auth &amp; a dashboard. It provides up-to-date browser images, drivers, and the tools required for Selenium test automation, and it can even forward test requests to a third-party cloud provider if the requested browser is not available in the local setup.</p>
<p>    Let’s see how we can use them together using Docker images and with Docker compose.</p>
<h3 id="heading-using-docker-images-1"><strong>Using Docker Images</strong></h3>
<p>    In order to use Zalenium, we need to pull two images. Once those images are available, we start the container using the docker run command with the necessary options.</p>
<p>    Use the below travis.yml file to configure the build using Docker images for Zalenium.</p>
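<p>    A hypothetical sketch of such a configuration (the image names are Zalenium&rsquo;s documented ones, while the environment variable names for the suite file, browser, and grid URL are placeholders set in the project settings):</p>

```yaml
dist: trusty
language: java
jdk: oraclejdk8
before_script:
  # pull the two Zalenium images described above
  - docker pull elgalu/selenium
  - docker pull dosel/zalenium
  # start Zalenium; it exposes a Selenium Grid hub on port 4444
  - docker run --rm -d --name zalenium -p 4444:4444 -v /var/run/docker.sock:/var/run/docker.sock dosel/zalenium start
script:
  - mvn clean install -DsuiteXmlFile=$TESTNG_XML_FILE -Dbrowser=$BROWSER -DgridHubURL=$GRID_URL
```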
<p>    We have also made use of the environment variables feature in Travis CI to pass values dynamically at run time for the TestNG XML file name, the browser, and the grid hub URL. These values are set in the Settings section of the project as below.</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-15.png" alt="unnamed (15)" /></p>
<p>    Once the execution is started, we can observe in the logs that the Travis CI system has read the environment variables and passed them to the running build.</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-14.png" alt="unnamed (14)" /></p>
<p>    Next, the required Docker images are pulled from the central registry, and the docker run command executes. Finally, the docker ps command shows the containers running at that moment.</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-13.png" alt="unnamed (13)" /></p>
<p>    The required browser container is started on port 4444, and the tests will be executed successfully. We can see the test execution status as below.</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-12.png" alt="unnamed (12)" /></p>
<p>    Similarly, we can change the browser value to GRID_FIREFOX and trigger a new build.</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-11.png" alt="unnamed (11)" /></p>
<p>    Once the execution for GRID_FIREFOX is started, the Travis CI system will read the environment variables and pass them to the running build.</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-10.png" alt="unnamed (10)" /></p>
<p>    The required browser container is started on port 4444, and the tests will be executed successfully. We can see the test execution status as below.</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-9.png" alt="unnamed (9)" /></p>
<h3 id="heading-using-docker-compose-1"><strong>Using Docker Compose</strong></h3>
<p>    We can also leverage Docker compose and Zalenium here to quickly spin up a scalable container-based Selenium Grid and run our tests.</p>
<p>    The below is the docker-compose.yml file with the required services. Here we are starting two services, Selenium and Zalenium, and listening to the incoming requests on port 4444.</p>
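<p>    An assumed reconstruction of that compose file, following Zalenium&rsquo;s documented setup with the two services described above (the repository&rsquo;s actual file may differ):</p>

```yaml
version: "2.1"
services:
  # pulls the browser image Zalenium uses for its nodes
  selenium:
    image: elgalu/selenium
    entrypoint: /bin/true
  zalenium:
    image: dosel/zalenium
    container_name: zalenium
    ports:
      - "4444:4444"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: start
    depends_on:
      - selenium
```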
<p>    The complete travis.yml file, in this case, is as below.</p>
<p>    Once the build starts, we can see the log details below: the required services are started, and the tests are executed.</p>
<p>    <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-8.png" alt="unnamed (8)" /></p>
<h2 id="heading-run-selenium-tests-with-selenium-cloud-using-travis-ci"><strong>Run Selenium Tests With Selenium Cloud using Travis CI</strong></h2>
<p>    From all our previous examples, it is evident that we can use Travis CI, Docker, and Selenium together to create automation regression pipelines with ease. We can configure the declarative YAML files (travis.yml &amp; docker-compose.yml) as per our requirements, trigger the build, and get the results. However, there is one drawback with all our previous examples.</p>
<p>    Say we want to watch the regression execution live as it happens: that is not possible, at least in the community version of Travis CI (the enterprise version can achieve it with some additional configuration). Likewise, checking logs such as driver logs and network logs would require further changes to our examples, which means additional effort again. Is there a solution to this problem? Yes: the LambdaTest cloud-based Selenium Grid comes to our rescue.</p>
<p>    LambdaTest is a <a target="_blank" href="https://www.lambdatest.com/">cross browser testing cloud platform</a> to perform automation testing on 3000+ combinations of real browsers and operating systems. It is very easy to configure a project using LambdaTest and start leveraging its benefits. Let’s integrate our example projects with the LambdaTest cloud grid and see how it works.</p>
<p>    We will look at both options: using Docker images directly, and starting the services with docker-compose.</p>
<h3 id="heading-using-docker-images-2"><strong>Using Docker Images</strong></h3>
<p>    To start with LambdaTest, first navigate to the <a target="_blank" href="https://accounts.lambdatest.com/register">LambdaTest registration page</a> and sign up for a new account. Once the LambdaTest account is activated, go to the <a target="_blank" href="https://accounts.lambdatest.com/detail/profile">Profile</a> section and note the LambdaTest Username and Access key. We will need these to execute our tests on the LambdaTest platform.</p>
<p>    We use the below travis.yml file to run our tests with Travis CI and Docker on the LambdaTest cloud grid.</p>
<ul>
<li><p>Before triggering the build, we need to ensure that the below environment variables are set in the project settings in Travis CI.</p>
<p>  <code>LT_USERNAME - Username from the LambdaTest profile page.</code></p>
<p>  <code>LT_ACCESS_KEY - Access key obtained from the LambdaTest profile page.</code></p>
<p>  The gridHubURL should be of the form shown below, where the Username and Access Key are the values noted from the LambdaTest account.</p>
<p>  <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-7.png" alt="unnamed (7)" /></p>
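<p>  As a rough sketch (the class and method names here are hypothetical, used only for illustration), the grid hub URL can be composed from the two environment variables like this:</p>
<pre><code class="lang-java">// Hypothetical sketch: composing the LambdaTest grid hub URL from the
// LT_USERNAME and LT_ACCESS_KEY values configured in Travis CI.
public class GridUrlSketch {

    // Embeds the credentials into the LambdaTest hub endpoint
    static String gridHubUrl(String username, String accessKey) {
        return "https://" + username + ":" + accessKey + "@hub.lambdatest.com/wd/hub";
    }

    public static void main(String[] args) {
        // In CI these values come from System.getenv("LT_USERNAME")
        // and System.getenv("LT_ACCESS_KEY")
        System.out.println(gridHubUrl("my-username", "my-access-key"));
    }
}
</code></pre>
<p>  The resulting value is what gets passed to the tests through the -DgridHubURL system property.</p>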
<p>  Once the build starts, Travis CI reads the configured environment variables.</p>
<p>  <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-6.png" alt="unnamed (6)" /></p>
<p>  As per the travis.yml file configuration, the respective Docker images are pulled, and containers are started.</p>
<p>  <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-5-1.png" alt="unnamed (5)" /></p>
<p>  We are using the docker run command with the lambdatestEnabled flag set to true. Because of this, Zalenium starts the hub on port 4444 and one custom node using docker-selenium. A cloud proxy node is registered to the grid, since we enabled the cloud integration with the LambdaTest platform. Once a test request is received, it is sent to the available node with the required capabilities to execute the tests.</p>
<p>  Once Travis CI completes the Selenium test automation execution, we can navigate to the <a target="_blank" href="https://automation.lambdatest.com/timeline/">Automation dashboard of LambdaTest</a> to see our test execution in the Timeline view. Clicking on a test case navigates us to the Automation logs screen, where we can see other details such as the execution recording, logs, etc.</p>
<p>  <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-4-1.png" alt="unnamed (4)" /></p>
<p>  <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-3-1.png" alt="unnamed (3)" /></p>
<p>  Next, let’s do the same using docker-compose.</p>
</li>
<li><p><strong>Using Docker Compose</strong></p>
<p>  It’s pretty much the same as our previous examples with minor modifications. Here is the docker-compose.yml file we use to integrate our tests with the LambdaTest cloud grid.</p>
</li>
<li><p>The configuration is almost the same as in the previous Travis CI and Docker examples, except that we set the lambdatestEnabled flag to true in the start command and add the environment variables for the LambdaTest username and access key using LT_USERNAME and LT_ACCESSKEY. Also, please make sure that the environment variables in the Travis CI project are still intact and available during the build.</p>
</li>
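<li><p>For reference, a minimal docker-compose.yml along these lines could look like the sketch below. This is an illustrative sketch, not the exact file from the project: dosel/zalenium is the standard Zalenium image, while the flag and variable names simply follow the description above and may differ in your setup.</p>
<pre><code class="lang-yaml">version: "3.8"

services:
  zalenium:
    image: dosel/zalenium
    ports:
      - 4444:4444
    volumes:
      # Zalenium needs the Docker socket to spin up browser containers
      - /var/run/docker.sock:/var/run/docker.sock
    # lambdatestEnabled routes the test sessions to the LambdaTest cloud grid
    command: start --lambdatestEnabled true
    environment:
      # Credentials configured in the Travis CI project settings
      - LT_USERNAME=${LT_USERNAME}
      - LT_ACCESSKEY=${LT_ACCESSKEY}
</code></pre>
</li>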
<li><p><img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-2-1.png" alt="unnamed (2)" /></p>
<p>  The build result would be similar, as the services are created by docker-compose and the test execution happens on the browser containers in LambdaTest.</p>
<h2 id="heading-parallel-testing-with-travis-ci-docker-and-lambdatest"><strong>Parallel Testing With Travis CI, Docker And LambdaTest</strong></h2>
<p>  So far, we have been discussing running the Selenium UI tests sequentially using multiple options with Travis CI, Docker, and LambdaTest. But as the application grows, we will have more tests in the suite, and the bigger the suite gets, the longer it takes to execute.</p>
<p>  If the execution time keeps growing, we are not meeting the primary goal of test automation: providing early feedback on product quality to the development team. So how do we achieve this goal without compromising on the testing scope?</p>
<p>  The answer is to run the tests in parallel wherever feasible, which can reduce the execution time drastically.</p>
<p>  Coming back to our discussion, let us make some changes to the testng.xml file to support <a target="_blank" href="https://www.lambdatest.com/blog/what-is-parallel-testing-and-why-to-adopt-it/">parallel test execution</a>. Add the required changes in a new file called testng-parallel.xml and add it to our project. We are using the parallel attribute of TestNG with a value of tests and configuring the tests to run in parallel with a thread count of 2. As you can observe, we are configuring the file to run our tests on the Chrome and Firefox browsers on the LambdaTest cloud platform.</p>
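<p>  Based on that description, a testng-parallel.xml could look roughly like this (a sketch; the suite and test names are illustrative, while the browser parameter values follow the earlier examples):</p>
<pre><code class="lang-xml">&lt;!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd"&gt;
&lt;!-- parallel="tests" with thread-count="2" runs the two test blocks concurrently --&gt;
&lt;suite name="Parallel Test Suite" parallel="tests" thread-count="2"&gt;
    &lt;test name="chrome-on-lambdatest"&gt;
        &lt;parameter name="browser" value="GRID_CHROME"/&gt;
        &lt;classes&gt;
            &lt;class name="com.lambdatest.SeleniumTests"/&gt;
        &lt;/classes&gt;
    &lt;/test&gt;
    &lt;test name="firefox-on-lambdatest"&gt;
        &lt;parameter name="browser" value="GRID_FIREFOX"/&gt;
        &lt;classes&gt;
            &lt;class name="com.lambdatest.SeleniumTests"/&gt;
        &lt;/classes&gt;
    &lt;/test&gt;
&lt;/suite&gt;
</code></pre>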
<p>  <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed1.png" alt="unnamed1" /></p>
<p>  If we go back to our LambdaTest Automation dashboard, we can observe that two sessions are started as the job executes: one for a Chrome browser and one for a Firefox browser. We can also observe that we are using 2 of 5 parallel sessions on the LambdaTest platform, which indicates that our tests are being executed in parallel.</p>
<p>  <img src="https://www.lambdatest.com/blog/wp-content/uploads/2021/08/unnamed-4.png" alt="unnamed" /></p>
<p>  In this way, we can play around with the configurations available in TestNG, Docker, and LambdaTest to build efficient and robust Selenium test automation suites.</p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>  In this post, we have covered some of the important concepts in Travis CI and Docker for building pipelines that perform Selenium test automation. We have also discussed integrating our test suites with the LambdaTest cloud grid, leveraging it in our Selenium test automation efforts, and <a target="_blank" href="https://www.lambdatest.com/blog/how-professional-qa-implements-a-robust-ci-cd-pipeline/">building robust CI/CD pipelines using Travis CI, Docker, and LambdaTest</a>. There are many more exciting features that you can try out and explore.</p>
<p>  Let us know in the comments if you want to have another article on the same topic with advanced capabilities.</p>
<p>  <strong>Happy Testing and Travis-ci-ing!</strong></p>
<pre><code class="lang-yaml">  dist: trusty
  language: java

  jdk:
    - oraclejdk8

  before_script:
    - docker run -d -p 4444:4444 -v /dev/shm:/dev/shm selenium/standalone-firefox:4.0.0-rc-1-prerelease-20210618
    - docker ps

  script:
    - mvn clean install -DsuiteXmlFile=testng.xml -Dbrowser=GRID_FIREFOX -DgridHubURL=http://localhost:4444/wd/hub

  cache:
    directories:
      - .autoconf
      - $HOME/.m2

  deploy:
    provider: releases
    api_key: ${api_key}
    skip_cleanup: true
    file: [ "target/surefire-reports/emailable-report.html",
            "target/surefire-reports/index.html" ]
    on:
      all_branches: true
      tags: false
</code></pre>
<pre><code class="lang-bash">  mvn clean install -DsuiteXmlFile=testng.xml -Dbrowser=GRID_FIREFOX -DgridHubURL=http://localhost:4444/wd/hub
</code></pre>
<pre><code class="lang-java">  &lt;!DOCTYPE suite SYSTEM <span class="hljs-string">"http://testng.org/testng-1.0.dtd"</span>&gt;
  &lt;suite name=<span class="hljs-string">"All Test Suite"</span>&gt;
      &lt;test verbose=<span class="hljs-string">"2"</span> name=<span class="hljs-string">"travisci-selenium-docker-lambdatest"</span>&gt;
          &lt;parameter name=<span class="hljs-string">"browser"</span> value=<span class="hljs-string">"GRID_CHROME"</span>&gt;
              &lt;parameter name=<span class="hljs-string">"gridHubURL"</span>
                         value=<span class="hljs-string">"http://localhost:4444/wd/hub"</span>/&gt;
          &lt;/parameter&gt;
          &lt;classes&gt;
              &lt;<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">name</span></span>=<span class="hljs-string">"com.lambdatest.SeleniumTests"</span>&gt;
                  &lt;methods&gt;
                      &lt;include name=<span class="hljs-string">"verifyHeader1"</span>/&gt;
                      &lt;include name=<span class="hljs-string">"verifyHeader2"</span>/&gt;
                      &lt;include name=<span class="hljs-string">"verifyFirstElementBehavior"</span>/&gt;
                      &lt;include name=<span class="hljs-string">"verifySecondElementBehavior"</span>/&gt;
                      &lt;include name=<span class="hljs-string">"verifyAddButtonBehavior"</span>/&gt;
                  &lt;/methods&gt;
              &lt;/<span class="hljs-class"><span class="hljs-keyword">class</span>&gt;
          &lt;/<span class="hljs-title">classes</span>&gt;
      &lt;/<span class="hljs-title">test</span>&gt;
  &lt;/<span class="hljs-title">suite</span>&gt;</span>
</code></pre>
<pre><code class="lang-java">  <span class="hljs-keyword">package</span> com.lambdatest;

  <span class="hljs-keyword">import</span> org.openqa.selenium.By;
  <span class="hljs-keyword">import</span> org.openqa.selenium.WebElement;
  <span class="hljs-keyword">import</span> org.testng.annotations.AfterTest;
  <span class="hljs-keyword">import</span> org.testng.annotations.BeforeTest;
  <span class="hljs-keyword">import</span> org.testng.annotations.Test;

  <span class="hljs-keyword">import</span> <span class="hljs-keyword">static</span> org.assertj.core.api.Assertions.assertThat;

  <span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">SeleniumTests</span> <span class="hljs-keyword">extends</span> <span class="hljs-title">BaseTest</span> </span>{

      <span class="hljs-meta">@BeforeTest</span>
      <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">setUp</span><span class="hljs-params">()</span> </span>{
          driver.get(<span class="hljs-string">"https://lambdatest.github.io/sample-todo-app/"</span>);
      }

      <span class="hljs-meta">@Test(priority = 1)</span>
      <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">verifyHeader1</span><span class="hljs-params">()</span> </span>{
          String headerText = driver.findElement(By.xpath(<span class="hljs-string">"//h2"</span>)).getText();
          assertThat(headerText).isEqualTo(<span class="hljs-string">"LambdaTest Sample App"</span>);
      }

      <span class="hljs-meta">@Test(priority = 2)</span>
      <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">verifyHeader2</span><span class="hljs-params">()</span> </span>{
          String text = driver.findElement(By.xpath(<span class="hljs-string">"//h2/following-sibling::div/span"</span>)).getText();
          assertThat(text).isEqualTo(<span class="hljs-string">"5 of 5 remaining"</span>);
      }

      <span class="hljs-meta">@Test(priority = 3)</span>
      <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">verifyFirstElementBehavior</span><span class="hljs-params">()</span> </span>{
          WebElement firstElementText = driver.findElement(By.xpath(<span class="hljs-string">"//input[@name='li1']/following-sibling::span[@class='done-false']"</span>));
          assertThat(firstElementText.isDisplayed()).isTrue();
          assertThat(firstElementText.getText()).isEqualTo(<span class="hljs-string">"First Item"</span>);

          assertThat(driver.findElement(By.name(<span class="hljs-string">"li1"</span>)).isSelected()).isFalse();
          driver.findElement(By.name(<span class="hljs-string">"li1"</span>)).click();
          assertThat(driver.findElement(By.name(<span class="hljs-string">"li1"</span>)).isSelected()).isTrue();

          WebElement firstItemPostClick = driver.findElement(By.xpath(<span class="hljs-string">"//input[@name='li1']/following-sibling::span[@class='done-true']"</span>));
          assertThat(firstItemPostClick.isDisplayed()).isTrue();

          String text = driver.findElement(By.xpath(<span class="hljs-string">"//h2/following-sibling::div/span"</span>)).getText();
          assertThat(text).isEqualTo(<span class="hljs-string">"4 of 5 remaining"</span>);
      }

      <span class="hljs-meta">@Test(priority = 4)</span>
      <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">verifySecondElementBehavior</span><span class="hljs-params">()</span> </span>{
          WebElement secondElementText = driver.findElement(By.xpath(<span class="hljs-string">"//input[@name='li2']/following-sibling::span[@class='done-false']"</span>));
          assertThat(secondElementText.isDisplayed()).isTrue();
          assertThat(secondElementText.getText()).isEqualTo(<span class="hljs-string">"Second Item"</span>);

          assertThat(driver.findElement(By.name(<span class="hljs-string">"li2"</span>)).isSelected()).isFalse();
          driver.findElement(By.name(<span class="hljs-string">"li2"</span>)).click();
          assertThat(driver.findElement(By.name(<span class="hljs-string">"li2"</span>)).isSelected()).isTrue();

          WebElement secondItemPostClick = driver.findElement(By.xpath(<span class="hljs-string">"//input[@name='li2']/following-sibling::span[@class='done-true']"</span>));
          assertThat(secondItemPostClick.isDisplayed()).isTrue();

          String text = driver.findElement(By.xpath(<span class="hljs-string">"//h2/following-sibling::div/span"</span>)).getText();
          assertThat(text).isEqualTo(<span class="hljs-string">"3 of 5 remaining"</span>);
      }

      <span class="hljs-meta">@Test(priority = 5)</span>
      <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">verifyAddButtonBehavior</span><span class="hljs-params">()</span> </span>{
          driver.findElement(By.id(<span class="hljs-string">"sampletodotext"</span>)).clear();
          driver.findElement(By.id(<span class="hljs-string">"sampletodotext"</span>)).sendKeys(<span class="hljs-string">"Yey, Let's add it to list"</span>);
          driver.findElement(By.id(<span class="hljs-string">"addbutton"</span>)).click();
          WebElement element = driver.findElement(By.xpath(<span class="hljs-string">"//input[@name='li6']/following-sibling::span[@class='done-false']"</span>));
          assertThat(element.isDisplayed()).isTrue();
          assertThat(element.getText()).isEqualTo(<span class="hljs-string">"Yey, Let's add it to list"</span>);
      }

      <span class="hljs-meta">@AfterTest</span>
      <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">teardown</span><span class="hljs-params">()</span> </span>{
          <span class="hljs-keyword">if</span> (driver != <span class="hljs-keyword">null</span>) {
              driver.quit();
          }
      }
  }
</code></pre>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[How To Build CI/CD Pipeline With TeamCity For Selenium Test Automation]]></title><description><![CDATA[This article was originally published on LambdaTest's official blog.
Continuous Integration/Continuous Deployment (CI/CD) has become an essential part of modern software development cycles. As a part of continuous integration, the developer should en...]]></description><link>https://blog.rakeshvardan.com/how-to-build-cicd-pipeline-with-teamcity-for-selenium-test-automation</link><guid isPermaLink="true">https://blog.rakeshvardan.com/how-to-build-cicd-pipeline-with-teamcity-for-selenium-test-automation</guid><category><![CDATA[selenium]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[automation testing ]]></category><dc:creator><![CDATA[Rakesh Vardan]]></dc:creator><pubDate>Wed, 06 Sep 2023 03:36:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/ZGjbiukp_-A/upload/ef23c637ecb9690e12d2068d1f1de9e4.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This article was originally published on LambdaTest's official</em> <a target="_blank" href="https://www.lambdatest.com/blog/ci-cd-pipeline-with-teamcity-for-selenium-test-automation/"><em>blog</em></a><em>.</em></p>
<p><a target="_blank" href="https://www.lambdatest.com/blog/what-is-continuous-integration-and-continuous-delivery/">Continuous Integration/Continuous Deployment</a> (CI/CD) has become an essential part of modern software development cycles. As a part of continuous integration, the developer should ensure that the Integration does not break the existing code because this could lead to a negative impact on the overall quality of the project. In order to show how the integration process works, we’ll take an example of a well-known continuous integration tool, <a target="_blank" href="https://www.jetbrains.com/teamcity/">TeamCity</a>. In this article, we will learn TeamCity concepts and integrate our test suites with TeamCity for <a target="_blank" href="https://www.lambdatest.com/automation-testing">test automation</a> by leveraging LambdaTest's <a target="_blank" href="https://www.lambdatest.com/selenium-automation">cloud-based Selenium grid</a>.</p>
<p>There are numerous <a target="_blank" href="https://www.lambdatest.com/learning-hub/cicd">best CI/CD tools</a> available for building high-quality code and narrowing the gap between development and impacted teams. Besides establishing a DevOps culture in the organizations, teams enhance it by implementing <a target="_blank" href="https://www.lambdatest.com/blog/16-best-practices-of-ci-cd-pipeline-to-speed-test-automation/">best CI/CD practices</a> throughout the Software Development Life Cycle(SDLC). These practices help the teams accelerate product development, automate processes, and improve overall productivity.</p>
<p>Please head over to the original article for detailed information.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.lambdatest.com/blog/ci-cd-pipeline-with-teamcity-for-selenium-test-automation/">https://www.lambdatest.com/blog/ci-cd-pipeline-with-teamcity-for-selenium-test-automation/</a></div>
]]></content:encoded></item><item><title><![CDATA[How I prepared for Microsoft Azure Fundamentals - AZ 900 exam ?]]></title><description><![CDATA[Microsoft Azure is one of the leading providers in the cloud computing market since many years. Azure  is Microsoft's public cloud offering that is spanned across many regions and offers wide variety of services for customers. I got a chance to explo...]]></description><link>https://blog.rakeshvardan.com/how-i-prepared-for-microsoft-azure-fundamentals-az-900-exam</link><guid isPermaLink="true">https://blog.rakeshvardan.com/how-i-prepared-for-microsoft-azure-fundamentals-az-900-exam</guid><category><![CDATA[Azure]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[Certification]]></category><dc:creator><![CDATA[Rakesh Vardan]]></dc:creator><pubDate>Fri, 05 Feb 2021 14:51:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1612536138139/5yk8ejsUQ.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong><a target="_blank" href="https://azure.microsoft.com/en-in/">Microsoft Azure</a></strong> has been one of the leading providers in the cloud computing market for many years. <strong>Azure</strong> is Microsoft's public cloud offering that spans many regions and offers a wide variety of services for customers. I got a chance to explore the Azure platform as part of my <strong>AZ-900</strong> certification preparation. In this post, I would like to explain the preparation plan that helped me pass the exam and get an overview of <strong>Azure</strong> itself.</p>
<h2 id="introduction">Introduction</h2>
<p>As per <em>Microsoft's</em> official <a target="_blank" href="https://docs.microsoft.com/en-us/learn/certifications/exams/az-900?source=learn#certification-exams">documentation</a>, candidates for this exam should have foundational knowledge of cloud services and how those services are provided with <strong>Microsoft Azure</strong>. The exam is intended for candidates who are just beginning to work with cloud-based solutions and services or are new to <strong>Azure</strong>. The exam skills outline is as below:</p>
<ul>
<li>Describe cloud concepts</li>
<li>Describe core Azure services</li>
<li>Describe core solutions and management tools on Azure</li>
<li>Describe general security and network security features</li>
<li>Describe identity, governance, privacy, and compliance features</li>
<li>Describe Azure cost management and Service Level Agreements</li>
</ul>
<blockquote>
<p>Microsoft changed the content of the <strong>AZ-900</strong> exam in November 2020, so always refer to the official documentation for the syllabus and other details.</p>
</blockquote>
<p>Let me explain my preparation plan and the materials I have covered.</p>
<h2 id="preparation">Preparation</h2>
<p>I already had some exposure to cloud technologies, specifically AWS and Google Cloud. I am a Google certified Cloud Engineer as well as Cloud Architect, so I was already familiar with the core cloud concepts and services. This preparation gave me an opportunity to explore Azure and understand the different services provided.</p>
<h3 id="1-microsoft-official-learning-path-for-az-900">1. Microsoft official learning path for AZ-900</h3>
<p>I started by following Microsoft's official learning path for <a target="_blank" href="https://docs.microsoft.com/en-us/learn/certifications/exams/az-900?source=learn#certification-exams">AZ-900</a></p>
<p>Below are the step-wise tutorials:</p>
<ul>
<li><a target="_blank" href="https://docs.microsoft.com/en-us/learn/paths/az-900-describe-cloud-concepts/">Azure Fundamentals part 1: Describe core Azure concepts</a></li>
<li><a target="_blank" href="https://docs.microsoft.com/en-us/learn/paths/az-900-describe-core-azure-services/">Azure Fundamentals part 2: Describe core Azure services</a></li>
<li><a target="_blank" href="https://docs.microsoft.com/en-us/learn/paths/az-900-describe-core-solutions-management-tools-azure/">Azure Fundamentals part 3: Describe core solutions and management tools on Azure</a></li>
<li><a target="_blank" href="https://docs.microsoft.com/en-us/learn/paths/az-900-describe-general-security-network-security-features/">Azure Fundamentals part 4: Describe general security and network security features</a></li>
<li><a target="_blank" href="https://docs.microsoft.com/en-us/learn/paths/az-900-describe-identity-governance-privacy-compliance-features/">Azure Fundamentals part 5: Describe identity, governance, privacy, and compliance features</a></li>
<li><a target="_blank" href="https://docs.microsoft.com/en-us/learn/paths/az-900-describe-azure-cost-management-service-level-agreements/">Azure Fundamentals part 6: Describe Azure cost management and service level agreements</a></li>
</ul>
<h4 id="advantages">Advantages:</h4>
<ul>
<li>Since this is a tutorial, we have the flexibility to complete it at our own pace</li>
<li>The quizzes in each section test our knowledge of each topic</li>
<li>With the help of detailed examples and use cases, we can understand the concepts easily</li>
<li>Each topic focuses on one area of the Azure platform</li>
<li>Some of the tutorials also give us practical learning experience with pre-configured Azure services</li>
</ul>
<h3 id="2-microsoft-azure-fundamentals-certification-free-course-az-900-by-exam-pro">2. Microsoft Azure Fundamentals Certification Free Course (AZ-900) by Exam Pro</h3>
<p>After getting acquainted with the basics of the <strong>Azure</strong> platform and services, I completed the below video course from Exam Pro.</p>
<ul>
<li><a target="_blank" href="https://www.youtube.com/watch?v=NKEFWyqJ5XA">AZ 900 Free Course by Exam Pro</a></li>
</ul>
<h4 id="advantages">Advantages:</h4>
<ul>
<li><a target="_blank" href="https://twitter.com/andrewbrown">Andrew Brown</a> - the author of the course has covered most of the exam syllabus in just 3 hours!</li>
<li>It can be a refresher before the exam</li>
<li>Exam guide walkthrough</li>
</ul>
<h3 id="3-self-practice">3. Self-Practice</h3>
<p>The next step in my preparation plan was to get some hands-on experience with the services provided by Azure. I created a free Azure account and explored different services such as <em>Azure Virtual Machines, App functions, different SQL database services</em> etc. This really gave me a feel for using each specific service, building on the knowledge from the theoretical lessons mentioned above.</p>
<h3 id="4-exam-topics">4. Exam Topics</h3>
<p>Finally, I went through some of the practice exams for AZ-900 to get a feel for the exam and its questions. A notable resource in this area is the <a target="_blank" href="https://www.examtopics.com/exams/microsoft/az-900/view/">Exam Topics</a> website, which has 150+ free practice questions for this exam. One thing to remember is that the answer mentioned on the website may not always be correct. We need to go through the discussion section for each question, use our knowledge, and come to a conclusion on the answer.</p>
<h3 id="other-materials">Other materials:</h3>
<p>I have also used some of the other materials/blogs as mentioned below:</p>
<ul>
<li><a target="_blank" href="https://github.com/ddneves/awesome-azure-learning">Awesome Azure Github repo</a></li>
<li><a target="_blank" href="https://vladtalkstech.com/az-900-study-guide-microsoft-azure-fundamentals">Study Guide by Vlad</a></li>
</ul>
<h2 id="exam-overview-and-experience">Exam overview and experience</h2>
<p>I completed all the above steps as discussed. It took me around one week (while working full-time), even as a cloud engineer already certified by another provider (GCP). I practiced a lot to get more hands-on experience with many of the services.</p>
<p>I chose to appear for the exam at a Pearson exam center near my location</p>
<ul>
<li>I got 41 questions (multiple-choice, multi-select, and drag-and-drop) that needed to be answered in 90 minutes</li>
<li>Since this is an entry-level exam, I feel it is easy given good preparation; still, you need to manage your time efficiently</li>
<li>There are no questions on using commands with the Azure CLI</li>
<li>After submitting the exam, we get a PASS/FAIL result with the marks achieved out of 1000. Microsoft also gives a detailed overview of the percentage of marks obtained in each section. We can also take a print of that, if needed</li>
<li>We also immediately receive a confirmation email with a certificate</li>
</ul>
<p>You may have a look at my certificate <a target="_blank" href="https://www.youracclaim.com/badges/5c211403-a9fb-4e2f-8281-df57dfecfde4?source=linked_in_profile">here</a></p>
<h2 id="conclusion">Conclusion</h2>
<p>Thank you for reading this far. I hope my experience is helpful for those who are trying to apply and prepare for <strong>AZ-900</strong> exam.</p>
<p>Wishing you all the best for your certification journey.</p>
]]></content:encoded></item><item><title><![CDATA[Tech bytes# Running Jenkins in a Docker Container]]></title><description><![CDATA[Would you like to use Jenkins on your machine without following the traditional installation process? If you say YES, then follow along with this brief article!
Prerequisites:
This article assumes you have some idea of what docker is and how to...]]></description><link>https://blog.rakeshvardan.com/tech-bytes-running-jenkins-in-a-docker-container</link><guid isPermaLink="true">https://blog.rakeshvardan.com/tech-bytes-running-jenkins-in-a-docker-container</guid><category><![CDATA[Jenkins]]></category><category><![CDATA[Docker]]></category><category><![CDATA[containers]]></category><category><![CDATA[Continuous Integration]]></category><dc:creator><![CDATA[Rakesh Vardan]]></dc:creator><pubDate>Fri, 29 Jan 2021 03:17:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1611562182114/pQSLmoaEf.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Would you like to use Jenkins on your machine without following the traditional installation process? If you say YES, then follow along with this brief article!</p>
<h3 id="prerequisites">Prerequisites:</h3>
<p>This article assumes you have some idea of what <em>docker</em> is and how to use it. The pre-requisites to get started are:</p>
<ul>
<li><code>Docker</code> running in your machine</li>
</ul>
<blockquote>
<p>If you are completely new to docker, I would highly recommend starting from <a target="_blank" href="https://www.docker.com/get-started">here</a>.</p>
</blockquote>
<h3 id="installation-steps">Installation Steps:</h3>
<ul>
<li>Create a folder called <strong>build</strong> and add a <code>Dockerfile</code> in it with content as below.</li>
</ul>
<pre><code># build/Dockerfile
<span class="hljs-keyword">FROM</span> jenkins/jenkins:lts-alpine

<span class="hljs-keyword">USER</span> root 
RUN apk <span class="hljs-keyword">add</span> docker
</code></pre><ul>
<li>Create a <code>docker-compose.yml</code> as below:</li>
</ul>
<pre><code><span class="hljs-comment">#docker-compose.yml</span>

<span class="hljs-attr">version:</span> <span class="hljs-string">"3.8"</span>

<span class="hljs-attr">services:</span>
  <span class="hljs-attr">jenkins:</span>
    <span class="hljs-attr">build:</span> <span class="hljs-string">build/</span>
    <span class="hljs-attr">ports:</span> 
      <span class="hljs-bullet">-</span> <span class="hljs-number">8090</span><span class="hljs-string">:8080</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"jenkins.data:/var/jenkins_home"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"/var/run/docker.sock:/var/run/docker.sock"</span>

<span class="hljs-attr">volumes:</span>
  <span class="hljs-attr">jenkins.data:</span>
</code></pre><ul>
<li>Run the below command to start the service:</li>
</ul>
<pre><code>docker-compose up -d
</code></pre><ul>
<li>Open the browser at:</li>
</ul>
<pre><code><span class="hljs-attribute">localhost</span>:<span class="hljs-number">8090</span>
</code></pre><ul>
<li>Jenkins shows a screen as below asking for the initial administrator password</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1611555641082/nJXPFP65x.png" alt="image.png" /></p>
<ul>
<li>Get the password from:</li>
</ul>
<pre><code>docker-compose <span class="hljs-keyword">exec</span> jenkins cat /var/jenkins_home/secrets/initialAdminPassword
</code></pre><ul>
<li>After entering the password from the above step, Jenkins shows the plugin selection screen.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1611555746384/_YNUE0SJN.png" alt="Jenkins Plugin Screen" /></p>
<p>After the required plugins are installed successfully, you will be redirected to the Jenkins Dashboard page. You may create an admin user if required.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1611561981662/16REaAS6p.png" alt="Jenkins Dashboard" /></p>
<p>That's it. You can use Jenkins now to create and run jobs as long as the container is running :) </p>
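<p>When you are done, the stack can be torn down with <code>docker-compose</code> as well. A minimal sketch, assuming you run it from the directory containing the <code>docker-compose.yml</code> above:</p>
<pre><code># stop and remove the Jenkins container; the jenkins.data volume is kept
docker-compose down

# additionally remove the named volume (this deletes all Jenkins state!)
docker-compose down -v
</code></pre>
<p>Because the Jenkins home lives in the named volume, a plain <code>docker-compose down</code> followed by <code>docker-compose up -d</code> brings Jenkins back with all jobs and plugins intact.</p>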
]]></content:encoded></item><item><title><![CDATA[How I prepared for Google Cloud Professional Architect exam?]]></title><description><![CDATA[Certifications are definitely a great way to enhance credibility, self-image and they encourage life-long learning and professional development. Last year(2020!) I have started my certification journey on cloud technologies. I could be able to get my...]]></description><link>https://blog.rakeshvardan.com/how-i-prepared-for-google-cloud-professional-architect-exam</link><guid isPermaLink="true">https://blog.rakeshvardan.com/how-i-prepared-for-google-cloud-professional-architect-exam</guid><category><![CDATA[GCP]]></category><category><![CDATA[google cloud]]></category><category><![CDATA[Certification]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Cloud]]></category><dc:creator><![CDATA[Rakesh Vardan]]></dc:creator><pubDate>Sun, 24 Jan 2021 16:55:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1611505316723/8pN6HzGc4.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Certifications are definitely a great way to enhance credibility, self-image and they encourage life-long learning and professional development. Last year(2020!) I have started my certification journey on cloud technologies. I could be able to get my first certification from <strong>Google Cloud(GCP)</strong> as a <strong>Cloud Engineer</strong>. Recently I have also completed my <strong>Cloud Architect </strong>certification from GCP. In this article, I would like to share all my experiences about my preparation and the exam itself.</p>
<h3 id="introduction">Introduction</h3>
<p>As per Google Cloud's official <a target="_blank" href="https://cloud.google.com/certification/cloud-architect">documentation</a>,</p>
<blockquote>
<p>A Professional Cloud Architect enables organizations to leverage Google Cloud technologies. With a thorough understanding of cloud architecture and Google Cloud Platform, this individual can design, develop, and manage robust, secure, scalable, highly available, and dynamic solutions to drive business objectives.</p>
</blockquote>
<p>The Google Cloud Architect should be able to:</p>
<ul>
<li>Design and plan a cloud solution architecture</li>
<li>Manage and provision the cloud solution infrastructure</li>
<li>Design for security and compliance</li>
<li>Analyze and optimize technical and business processes</li>
<li>Manage implementations of cloud architecture</li>
<li>Ensure solution and operations reliability</li>
</ul>
<h3 id="how-its-different-from-a-cloud-engineer-exam">How is it different from the <strong>Cloud Engineer</strong> exam?</h3>
<p>The <strong>Cloud Engineer</strong> exam focuses on the tasks that cloud engineers perform, such as creating virtual machines, configuring instance groups, assigning roles to identities, monitoring VMs and so on. This exam is more likely to have detailed questions about commands using <em>gcloud, gsutil</em> and <em>bq</em>.</p>
<p>The <strong>Cloud Architect</strong> exam, on the other hand, focuses more on the candidate's ability to perform tasks such as identifying which storage option is best, designing an architecture that meets the necessary regulatory requirements, or understanding the implications of horizontally scaling a database. Architects should also be familiar with the command options, but detailed knowledge of them is not necessary. In my experience, questions on using GCP commands are unlikely in this exam.</p>
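<p>To give a feel for the difference: the Cloud Engineer exam may expect you to recall concrete commands like the following (illustrative examples with made-up resource names), whereas the Architect exam focuses on choosing between services and designs:</p>
<pre><code># create a VM - Cloud Engineer-level detail
gcloud compute instances create demo-vm --zone=us-central1-a --machine-type=e2-medium

# create a Cloud Storage bucket
gsutil mb gs://demo-bucket-name/

# list BigQuery datasets in the current project
bq ls
</code></pre>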
<p>Let me explain my detailed preparation plan for the <strong>Cloud Architect</strong> exam. </p>
<blockquote>
<p>I have also shared a similar experience report about the Cloud Engineer exam <a target="_blank" href="https://rakeshvardan.hashnode.dev/how-i-prepared-for-google-cloud-certified-associate-cloud-engineer-exam">here</a>.</p>
</blockquote>
<h3 id="1-google-learning-path-from-coursera">1. Google Learning path from Coursera</h3>
<p>I started my preparation with the official Google-recommended learning path below. If followed in the same order, it gives you a very good understanding of the different services that GCP provides and of an architect's role in leveraging those services to build solutions.</p>
<ul>
<li><a target="_blank" href="https://googlecourses.qwiklabs.com/course_templates/1?mkt_tok=eyJpIjoiWmpZMVltSmhOMlk1WlRJdyIsInQiOiJ4UVpKR1ZjeWk4b2oxcTN3M3FVMENxT1dxTGJjaHd3T0hNODlzVjlGbFUzNURcL0l1UTVVN1ducUk3cmM1QlwvblpHY0czUVRBZE5YbVg3TFNxWFpMejRSU1FnNDJLRmw0SjEwTlwvSUVkVmR5NWJHUFZJazJsbkQwNkZlTktsXC9ZcHgifQ%3D%3D">Google Cloud Platform Fundamentals: Core Infrastructure</a></li>
<li><a target="_blank" href="https://googlecourses.qwiklabs.com/course_templates/50">Essential Google Cloud Infrastructure: Foundation</a></li>
<li><a target="_blank" href="https://googlecourses.qwiklabs.com/course_templates/49?mkt_tok=eyJpIjoiWmpZMVltSmhOMlk1WlRJdyIsInQiOiJ4UVpKR1ZjeWk4b2oxcTN3M3FVMENxT1dxTGJjaHd3T0hNODlzVjlGbFUzNURcL0l1UTVVN1ducUk3cmM1QlwvblpHY0czUVRBZE5YbVg3TFNxWFpMejRSU1FnNDJLRmw0SjEwTlwvSUVkVmR5NWJHUFZJazJsbkQwNkZlTktsXC9ZcHgifQ%3D%3D">Essential Google Cloud Infrastructure: Core Services</a></li>
<li><a target="_blank" href="https://googlecourses.qwiklabs.com/course_templates/51?mkt_tok=eyJpIjoiWmpZMVltSmhOMlk1WlRJdyIsInQiOiJ4UVpKR1ZjeWk4b2oxcTN3M3FVMENxT1dxTGJjaHd3T0hNODlzVjlGbFUzNURcL0l1UTVVN1ducUk3cmM1QlwvblpHY0czUVRBZE5YbVg3TFNxWFpMejRSU1FnNDJLRmw0SjEwTlwvSUVkVmR5NWJHUFZJazJsbkQwNkZlTktsXC9ZcHgifQ%3D%3D">Elastic Google Cloud Infrastructure: Scaling and Automation</a></li>
<li><a target="_blank" href="https://googlecourses.qwiklabs.com/course_templates/41">Reliable Google Cloud Infrastructure: Design and Process</a></li>
<li><a target="_blank" href="https://googlecourses.qwiklabs.com/course_templates/78?mkt_tok=eyJpIjoiWmpZMVltSmhOMlk1WlRJdyIsInQiOiJ4UVpKR1ZjeWk4b2oxcTN3M3FVMENxT1dxTGJjaHd3T0hNODlzVjlGbFUzNURcL0l1UTVVN1ducUk3cmM1QlwvblpHY0czUVRBZE5YbVg3TFNxWFpMejRSU1FnNDJLRmw0SjEwTlwvSUVkVmR5NWJHUFZJazJsbkQwNkZlTktsXC9ZcHgifQ%3D%3D">Preparing for the Google Cloud Professional Cloud Architect Exam</a></li>
</ul>
<blockquote>
<p>There is an overlap between the Cloud Engineer and Cloud Architect exams, so you may have already completed some of the above courses as part of your Cloud Engineer exam preparation.</p>
</blockquote>
<p>If you have prior experience working with AWS, you may optionally take the course below. It enables a smoother transition from AWS by comparing the services from both platforms:</p>
<ul>
<li><a target="_blank" href="https://googlecourses.qwiklabs.com/course_templates/38?mkt_tok=eyJpIjoiWmpZMVltSmhOMlk1WlRJdyIsInQiOiJ4UVpKR1ZjeWk4b2oxcTN3M3FVMENxT1dxTGJjaHd3T0hNODlzVjlGbFUzNURcL0l1UTVVN1ducUk3cmM1QlwvblpHY0czUVRBZE5YbVg3TFNxWFpMejRSU1FnNDJLRmw0SjEwTlwvSUVkVmR5NWJHUFZJazJsbkQwNkZlTktsXC9ZcHgifQ%3D%3D">Google Cloud Platform Fundamentals for AWS Professionals</a></li>
</ul>
<p>Suggested Hands-On Labs from <a target="_blank" href="https://www.qwiklabs.com">Qwiklabs</a> are:</p>
<ul>
<li><a target="_blank" href="https://googlecourses.qwiklabs.com/quests/24?mkt_tok=eyJpIjoiWmpZMVltSmhOMlk1WlRJdyIsInQiOiJ4UVpKR1ZjeWk4b2oxcTN3M3FVMENxT1dxTGJjaHd3T0hNODlzVjlGbFUzNURcL0l1UTVVN1ducUk3cmM1QlwvblpHY0czUVRBZE5YbVg3TFNxWFpMejRSU1FnNDJLRmw0SjEwTlwvSUVkVmR5NWJHUFZJazJsbkQwNkZlTktsXC9ZcHgifQ%3D%3D">Cloud Architecture</a></li>
<li><a target="_blank" href="https://googlecourses.qwiklabs.com/quests/47">Cloud Architecture - Design, Implement, and Manage</a></li>
</ul>
<h4 id="advantages">Advantages:</h4>
<ul>
<li>Flexibility to complete the courses at your own pace</li>
<li>The quizzes in each section test your knowledge of the topic</li>
<li>Topic-wise slides and official documentation references are provided</li>
<li>Each course focuses on one area of the GCP platform</li>
<li>Qwiklabs quests give you great practical learning experience</li>
</ul>
<h3 id="2-google-cloud-architect-study-guide-from-dan-sullivan">2. Google Cloud Architect Study Guide from Dan Sullivan</h3>
<p>I already had experience with Dan Sullivan's study guides on GCP. His study guide for the <strong>Cloud Engineer</strong> exam helped me immensely in understanding all the basics of GCP, and it was one of the key factors in my success in that exam as well as in my day-to-day use of GCP. So selecting the similar <a target="_blank" href="https://www.amazon.in/Google-Professional-Cloud-Architect-Study/dp/1119602440/ref=tmm_pap_swatch_0?_encoding=UTF8&amp;qid=&amp;sr=">guide</a> for the <strong>Cloud Architect</strong> exam was an obvious choice for me. I really like his writing style and level of detail.</p>
<p>This book covers all the exam objectives - enabling us to design network, storage, and compute resources; meet business and technical requirements; design for security and compliance; plan migrations; and much more. It also comes with numerous practice questions for each section, which help you prepare for the exam.</p>
<h4 id="advantages">Advantages:</h4>
<ul>
<li>Covers all the exam objectives</li>
<li>Review questions at the end of each chapter &amp; one practice test</li>
<li>Detailed explanation about the <a target="_blank" href="https://cloud.google.com/certification/guides/professional-cloud-architect">sample case studies</a></li>
<li>Clearly distinguishes design architectures for business requirements from those for technical requirements</li>
<li>Explains the overall SRE (Site Reliability Engineering) ideology</li>
<li>Discusses cloud migration approaches</li>
</ul>
<h3 id="3-qwiklabs">3. Qwiklabs</h3>
<p>After acquiring the theoretical knowledge, I started working on different quests in <a target="_blank" href="https://www.qwiklabs.com">Qwiklabs</a>. This really helped me practice in a real environment and gave me confidence in using different GCP services. I highly recommend the <a target="_blank" href="https://chriskyfung.github.io/blog/qwiklabs/Qwiklabs-User-Tips-for-Learning_Google_Cloud_Platform">Visual Map of Qwiklabs GCP Quests</a> created and maintained by <strong>Chris F</strong>. This article maps the different quests available for GCP and helps beginners focus on the respective areas and learn accordingly. As I already had some experience using GCP, I completed only the quests where I needed more hands-on practice and deeper understanding.</p>
<h4 id="advantages">Advantages:</h4>
<ul>
<li>Practical experience !!!</li>
</ul>
<h3 id="4-awesome-gcp-videos-by-sathish-vj">4. Awesome GCP videos by Sathish VJ</h3>
<p>Sathish VJ has put a lot of effort into creating video playlists for most of the GCP exams. I completed his <a target="_blank" href="https://www.youtube.com/watch?v=iNJe_NrbijM&amp;list=PLQMsfKRZZviTIxEh0pkWNwnDUasGVZS4n">playlist on Cloud Architect</a> for this exam. In these videos, Sathish takes questions from the official practice set and discusses both the underlying concept and the approach to answering. A logical approach to answering any kind of question is really important for this exam, as questions may come from any corner of the GCP service portfolio. I would suggest that any GCP aspirant go through the videos before attempting an exam.</p>
<p>There is also an active <a target="_blank" href="https://github.com/sathishvj/awesome-gcp-certifications">GitHub repo</a> with all the information related to GCP certifications. It can be a single point of reference for anything on GCP.</p>
<h4 id="advantages">Advantages:</h4>
<ul>
<li>Can be a refresher before the exam</li>
<li>Tips on finding the best answers</li>
</ul>
<h3 id="5-google-sre-book">5. Google SRE book</h3>
<p>Google has published a series of books explaining how it runs and maintains its production systems (Gmail, YouTube, etc.). <a target="_blank" href="https://sre.google/sre-book/table-of-contents/">Site Reliability Engineering - SRE</a> is one such book, and it is really helpful for understanding the overall process behind the SRE ideology and the relevant practices. The books are available online to read for free and contain some of the best ideas and methods for architecting a solution.</p>
<h4 id="advantages">Advantages:</h4>
<ul>
<li>Best SRE practices</li>
<li>Available online for free reading</li>
</ul>
<h3 id="exam-overview-and-experience">Exam overview and experience</h3>
<p>I completed all the steps discussed above. It took me around two months (while working full-time), even as a certified Cloud Engineer. I practiced a lot to get more hands-on experience with many services.</p>
<p>I chose to appear for the exam at a Kryterion exam center near my location.</p>
<ul>
<li>The exam consists of 50 questions (multiple-choice &amp; multi-select) that need to be answered in 120 minutes.</li>
<li>The exam pattern is mostly similar to the Cloud Engineer exam - the only difference is that we get some questions from the sample case studies (<em>Mountkirk Games, Dress4Win and TerramEarth</em>).</li>
<li>Questions are a bit more complex and information-dense compared to the Cloud Engineer exam, so you need to manage your time efficiently.</li>
<li>There are very few questions on using commands (for me there were none at all!).</li>
<li>After submitting the exam we get a preliminary PASS/FAIL result.</li>
<li>If you pass, Google will send a confirmation email within a week, along with the certificate.</li>
</ul>
<blockquote>
<p>For all professional-level certifications, Google also sends a code to purchase merchandise from its <a target="_blank" href="https://shop.googlemerchandisestore.com/signin.html?vid=20180201712&amp;loginway=header">store</a>.</p>
</blockquote>
<p>You may have a look at my certificate <a target="_blank" href="https://www.credential.net/34b28b63-4dbc-4e64-8431-433a8463c979?key=a9d106a72e1d710dab291354824c14ab3faa33db826d9c947c5d0e00a1218c9b#gs.rkmfqk">here</a>.</p>
<h3 id="conclusion">Conclusion</h3>
<p>I hope my experience helps anyone preparing for the <strong>Cloud Architect</strong> exam. Wishing you all the best on your certification journey.</p>
<p>Thank you !</p>
]]></content:encoded></item><item><title><![CDATA[Getting started with deploying PostgreSQL on Docker container]]></title><description><![CDATA[Have you ever felt that installing software is hard? It might become complex because of missing dependencies of the complex applications and many configurations around it. It requires lot of effort from developer side, takes time to check the errors,...]]></description><link>https://blog.rakeshvardan.com/getting-started-with-deploying-postgresql-on-docker-container</link><guid isPermaLink="true">https://blog.rakeshvardan.com/getting-started-with-deploying-postgresql-on-docker-container</guid><category><![CDATA[Docker]]></category><category><![CDATA[PostgreSQL]]></category><dc:creator><![CDATA[Rakesh Vardan]]></dc:creator><pubDate>Mon, 28 Sep 2020 07:48:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1601286598782/--1D0TxYO.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Have you ever felt that installing software is hard? It might become complex because of missing dependencies of the complex applications and many configurations around it. It requires lot of effort from developer side, takes time to check the errors, resolve the conflicts and to have the software up &amp; running. <a target="_blank" href="https://www.docker.com/">Docker</a> provides a simple way of installing and running the applications with an easy to follow API. In this article we are going to understand the process of installing and running the <code>postgreSQL</code> using docker. You can follow along these steps for having <code>postgres</code> installed for your local development as well as remote server setup.</p>
<h2 id="contents">Contents</h2>
<p><a class="post-section-overview" href="#introduction">Introduction</a></p>
<p><a class="post-section-overview" href="#pre-requisites">Pre-requisites</a></p>
<p><a class="post-section-overview" href="#option-1-running-postgres-with-direct-docker-commands">Option - 1: Running Postgres with direct docker commands</a></p>
<p><a class="post-section-overview" href="#option-2-running-postgres-with-docker-compose">Option - 2: Running Postgres with docker compose</a></p>
<p><a class="post-section-overview" href="#running-multiple-instances-of-postgres-in-the-same-host">Running multiple instances of postgres in the same host</a></p>
<p><a class="post-section-overview" href="#conclusion">Conclusion</a></p>
<h2 id="introduction">Introduction</h2>
<p>PostgreSQL is an open-source, object-relational database management system (ORDBMS), commonly referred to as <em>Postgres</em>. As a database server, its primary function is to store data securely and retrieve it later on request. Developers opt for this relational database as it is free, stable and flexible.</p>
<p>Deploying Postgres in a container is cost-efficient in terms of infrastructure; it also supports CI/CD development and streamlines deployment and application management.</p>
<h2 id="pre-requisites">Pre-requisites</h2>
<p>This article assumes you have some idea of what <code>docker</code> is and how to use it. The pre-requisites to get started are:</p>
<ul>
<li>Docker daemon running in your machine</li>
<li>Account has been created in <a target="_blank" href="https://hub.docker.com/">docker hub</a></li>
</ul>
<blockquote>
<p>If you are completely new to <code>docker</code>, I would highly recommend starting from <a target="_blank" href="https://www.docker.com/get-started">here</a>.</p>
</blockquote>
<p>We can either run <code>postgres</code> with direct docker commands or use a declarative <code>docker-compose</code> file.</p>
<hr />
<h2 id="option-1-running-postgres-with-direct-docker-commands">Option - 1: Running Postgres with direct docker commands</h2>
<h3 id="pull-postgres-image-from-docker-hub">Pull Postgres image from Docker Hub</h3>
<p>To download a particular image, or set of images (i.e., a repository), use <code>docker pull</code>. If no tag is provided, Docker Engine uses the :latest tag as a default. The <code>docker pull</code> command syntax is</p>
<pre><code>docker pull [OPTIONS] NAME[<span class="hljs-symbol">:TAG|</span>@DIGEST]
</code></pre><p>Run the below command in a <strong>terminal</strong> to pull the <code>postgres</code> image from docker hub</p>
<pre><code>$ docker pull postgres
</code></pre><p>You would see the below outcome if the action is successful</p>
<pre><code>$ docker pull postgres
Using default tag: latest
<span class="hljs-section">latest: Pulling from library/postgres</span>
<span class="hljs-section">d121f8d1c412: Already exists</span>
<span class="hljs-section">9f045f1653de: Pull complete</span>
<span class="hljs-section">fa0c0f0a5534: Pull complete</span>
<span class="hljs-section">54e26c2eb3f1: Pull complete</span>
<span class="hljs-section">cede939c738e: Pull complete</span>
<span class="hljs-section">69f99b2ba105: Pull complete</span>
<span class="hljs-section">218ae2bec541: Pull complete</span>
<span class="hljs-section">70a48a74e7cf: Pull complete</span>
<span class="hljs-section">c0159b3d9418: Pull complete</span>
<span class="hljs-section">353f31fdef75: Pull complete</span>
<span class="hljs-section">03d73272c393: Pull complete</span>
<span class="hljs-section">8f89a54571bf: Pull complete</span>
<span class="hljs-section">4885714928b5: Pull complete</span>
<span class="hljs-section">3060b8f258ec: Pull complete</span>
<span class="hljs-section">Digest: sha256:0171a93d62342d2ab2461069609175674d2a1809a1ad7ce9ba141e2c09db1156</span>
<span class="hljs-section">Status: Downloaded newer image for postgres:latest</span>
<span class="hljs-section">docker.io/library/postgres:latest</span>
</code></pre><p>You could verify the local cache of the images available on the system using the below command.</p>
<pre><code>$ docker images
REPOSITORY                       TAG                 IMAGE ID            CREATED             SIZE
postgres                         latest              <span class="hljs-number">817</span>f2d3d51ec        <span class="hljs-number">2</span> days ago          <span class="hljs-number">314</span>MB
</code></pre><h3 id="start-the-postgres-container">Start the Postgres container</h3>
<p>Now that the <code>postgres</code> image is on our system, we need to use the <code>docker run</code> command to start the container using the image that we just downloaded. The syntax for <code>docker run</code> is as follows</p>
<pre><code>docker run <span class="hljs-comment">--name [container_name] -e POSTGRES_PASSWORD=[your_password] -d postgres</span>
</code></pre><p>Create a docker volume as below</p>
<pre><code>$ docker volume <span class="hljs-keyword">create</span> <span class="hljs-comment">--name=pgdata</span>
</code></pre><p>Use the below command to start the <code>postgres</code> container using the <code>postgres:latest</code> image we just downloaded. We run it with a couple of parameters such as the port mapping and the volume.</p>
<pre><code>$ docker run --rm --name pg-docker -e POSTGRES_PASSWORD=password -d -p 5432:5432 -v pgdata:/var/lib/postgresql/data postgres
</code></pre><p>After running the above command, we get back the ID (a hash) of the container just started. As you can see, we have provided various options to the <code>docker run</code> command:</p>
<ul>
<li><em>--rm: This tells docker to automatically remove the container and its associated file system upon exit. If we run many short-term containers, it is good practice to pass the <code>rm</code> flag to the docker run command for automatic cleanup and to avoid disk space issues (this will be a life saver!). We can use the <code>-v</code> option to persist data beyond the lifecycle of a container</em></li>
<li><em>--name: An identifier for the container. We can choose any name we want. Note that two existing (even if they are stopped) containers cannot have the same name. In order to re-use a name, you would either need to pass the <code>rm</code> flag to the docker run command or explicitly remove the container by using the command <code>docker rm [container name]</code></em></li>
<li><em>-e: Exposes environment variable of name <code>POSTGRES_PASSWORD</code> with value <code>password</code> to the container. This environment variable sets the superuser password for PostgreSQL. We can set <code>POSTGRES_PASSWORD</code> to anything we like. There are additional environment variables you can set. These include <code>POSTGRES_USER</code> and <code>POSTGRES_DB</code>. <code>POSTGRES_USER</code> sets the superuser name. If not provided, the superuser name defaults to <code>postgres</code>. <code>POSTGRES_DB</code> sets the name of the default database to setup. If not provided, it defaults to the value of <code>POSTGRES_USER</code></em></li>
<li><em>-d: Launches the container in a detached mode or in other words, in the background - so that we could use the terminal for other operations</em></li>
<li><em>-p: Binds port <code>5432</code> on localhost to port <code>5432</code> within the container. This option enables applications running outside of the container to connect to the Postgres server running inside it</em></li>
<li><em>-v: Mounts the named volume <code>pgdata</code> (created earlier) at <code>/var/lib/postgresql/data</code>, the default data directory inside the container. This ensures that <code>postgres</code> data persists even after the container is removed</em></li>
</ul>
<blockquote>
<p>There are other options also available for <code>docker run</code>. For a complete list of options please check <a target="_blank" href="https://docs.docker.com/engine/reference/run/">here</a></p>
</blockquote>
<p>Verify the active instance of the <code>postgres</code> server using:</p>
<pre><code><span class="hljs-string">$</span> <span class="hljs-string">docker</span> <span class="hljs-string">ps</span>
<span class="hljs-string">CONTAINER</span> <span class="hljs-string">ID</span>        <span class="hljs-string">IMAGE</span>               <span class="hljs-string">COMMAND</span>                  <span class="hljs-string">CREATED</span>             <span class="hljs-string">STATUS</span>              <span class="hljs-string">PORTS</span>                    <span class="hljs-string">NAMES</span>
<span class="hljs-string">62f6afbe7802</span>        <span class="hljs-string">postgres</span>            <span class="hljs-string">"docker-entrypoint.s…"</span>   <span class="hljs-number">3</span> <span class="hljs-string">seconds</span> <span class="hljs-string">ago</span>       <span class="hljs-string">Up</span> <span class="hljs-number">2</span> <span class="hljs-string">seconds</span>        <span class="hljs-number">0.0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span><span class="hljs-string">:5432-&gt;5432/tcp</span>   <span class="hljs-string">pg-docker</span>
</code></pre><h3 id="connect-to-the-postgres-db">Connect to the postgres DB</h3>
<p>The command syntax to connect to any container is:</p>
<pre><code><span class="hljs-selector-tag">docker</span> <span class="hljs-selector-tag">exec</span> <span class="hljs-selector-tag">-it</span> <span class="hljs-selector-attr">[container_name]</span> <span class="hljs-selector-tag">psql</span> <span class="hljs-selector-tag">-U</span> <span class="hljs-selector-attr">[postgres_user]</span>
</code></pre><ul>
<li><h3 id="if-you-are-using-these-steps-in-a-windows-machine">If you are using these steps on a Windows machine</h3>
</li>
</ul>
<p>First, we need to connect to the <code>postgres</code> container in order to do some work</p>
<pre><code>$ docker exec -it pg-docker bash
root@62f6afbe7802<span class="hljs-symbol">:/</span><span class="hljs-comment">#</span>
</code></pre><p>As you can see, we now have root access to the container. Notice that the prompt shows the container ID we have been using since the start.</p>
<p>Now we switch to the <code>postgres</code> user, which is the superuser created by default for us.</p>
<pre><code>$ root@62f6afbe7802<span class="hljs-symbol">:/</span><span class="hljs-comment"># su postgres</span>
postgres@62f6afbe7802<span class="hljs-symbol">:/</span>$
</code></pre><p>Once logged in with the <code>postgres</code> user, run the <code>psql</code> command as below</p>
<pre><code>$ postgres@62f6afbe7802<span class="hljs-symbol">:/</span>$ psql
psql (<span class="hljs-number">13.0</span> (Debian <span class="hljs-number">13.0</span>-<span class="hljs-number">1</span>.pgdg10<span class="hljs-number">0</span>+<span class="hljs-number">1</span>))
Type <span class="hljs-string">"help"</span> <span class="hljs-keyword">for</span> help.
</code></pre><p>Now we are in the <code>psql</code> interactive session, where we can perform all <code>postgres</code>-related activities.</p>
<p>For example, to check the connection information, issue the below command</p>
<pre><code>$ postgres=# \conninfo
You are connected <span class="hljs-keyword">to</span> <span class="hljs-keyword">database</span> "postgres" <span class="hljs-keyword">as</span> <span class="hljs-keyword">user</span> "postgres" via socket <span class="hljs-keyword">in</span> "/var/run/postgresql" at port "5432".
</code></pre><p>To create a new database named <code>testdb</code>, run</p>
<pre><code>$ postgres=# <span class="hljs-keyword">create</span> <span class="hljs-keyword">database</span> testdb;
<span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">DATABASE</span>
</code></pre><p>In order to list all the available databases, use</p>
<pre><code>$ postgres=<span class="hljs-comment"># \l</span>
                                 List of databases
   Name    <span class="hljs-params">|  Owner   |</span> Encoding <span class="hljs-params">|  Collate   |</span>   Ctype    <span class="hljs-params">|   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 postgres  |</span> postgres <span class="hljs-params">| UTF8     |</span> en_US.utf8 <span class="hljs-params">| en_US.utf8 |</span>
 template<span class="hljs-number">0</span> <span class="hljs-params">| postgres |</span> UTF8     <span class="hljs-params">| en_US.utf8 |</span> en_US.utf8 <span class="hljs-params">| =c/postgres          +
           |</span>          <span class="hljs-params">|          |</span>            <span class="hljs-params">|            |</span> postgres=CTc/postgres
 template1 <span class="hljs-params">| postgres |</span> UTF8     <span class="hljs-params">| en_US.utf8 |</span> en_US.utf8 <span class="hljs-params">| =c/postgres          +
           |</span>          <span class="hljs-params">|          |</span>            <span class="hljs-params">|            |</span> postgres=CTc/postgres
 testdb    <span class="hljs-params">| postgres |</span> UTF8     <span class="hljs-params">| en_US.utf8 |</span> en_US.utf8 <span class="hljs-params">|
(4 rows)

postgres=#</span>
</code></pre><p>As you can see, the newly created database <code>testdb</code> is present in the list. Similarly you can run other <code>psql</code> commands and use the DB for development and management tasks.</p>
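<p>From the same <code>psql</code> session you can start using the new database straight away. A small illustrative session (the table and data here are made up for the example):</p>
<pre><code>postgres=# \c testdb
You are now connected to database "testdb" as user "postgres".
testdb=# CREATE TABLE users (id SERIAL PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE
testdb=# INSERT INTO users (name) VALUES ('alice'), ('bob');
INSERT 0 2
testdb=# SELECT count(*) FROM users;
 count
-------
     2
(1 row)
</code></pre>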
<p>Use <code>exit</code> multiple times to come out of the nested shells</p>
<pre><code>$ postgres=<span class="hljs-comment"># exit</span>
postgres@62f6afbe7802<span class="hljs-symbol">:/</span>$ exit
exit
root@62f6afbe7802<span class="hljs-symbol">:/</span><span class="hljs-comment"># exit</span>
exit
$
</code></pre><blockquote>
<p>Please refer <a target="_blank" href="https://www.postgresqltutorial.com/psql-commands/">this</a> for some of the commonly used actions within <code>psql</code></p>
</blockquote>
<p>You can also connect with a GUI client like DBeaver, pgAdmin, etc.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1601282861907/SQCpdeAWe.png" alt="image.png" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1601282942329/w_4_kdZd9.png" alt="image.png" /></p>
<blockquote>
<p>Please note that you may observe issues while connecting to a <code>postgres</code> server running in a Docker container on Windows. As per this <a target="_blank" href="https://github.com/sameersbn/docker-postgresql/issues/112">issue</a>, we need to first stop any existing <code>postgres</code> Windows service in order to log in.</p>
</blockquote>
<ul>
<li><h3 id="if-you-are-using-these-steps-in-a-linux-machine">If you are using these steps on a Linux machine</h3>
</li>
</ul>
<p>Follow these steps on a Linux machine - most of them are similar, with minor modifications (<em>tested on Ubuntu 16.04 LTS</em>)</p>
<pre><code>$ sudo docker exec -it pg-docker psql -U postgres
psql (<span class="hljs-number">13.0</span> (Debian <span class="hljs-number">13.0</span>-<span class="hljs-number">1</span>.pgdg10<span class="hljs-number">0</span>+<span class="hljs-number">1</span>))
Type <span class="hljs-string">"help"</span> <span class="hljs-keyword">for</span> help.

$ postgres=<span class="hljs-comment"># \conninfo</span>
You are connected to database <span class="hljs-string">"postgres"</span> as user <span class="hljs-string">"postgres"</span> via socket <span class="hljs-keyword">in</span> <span class="hljs-string">"/var/run/postgresql"</span> at port <span class="hljs-string">"5432"</span>.
postgres=<span class="hljs-comment"># create database testdb;</span>
CREATE DATABASE
postgres=<span class="hljs-comment"># \l</span>
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | 
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 testdb    | postgres | UTF8     | en_US.utf8 | en_US.utf8 | 
(4 rows)

postgres=#
</code></pre><blockquote>
<p>We could also just run the <code>docker run</code> command as above, without having to pull the image separately. Docker will pull the required image if it's not present in the local cache.</p>
</blockquote>
<hr />
<h2 id="option-2-running-postgres-with-docker-compose">Option - 2: Running Postgres with docker compose</h2>
<p>Instead of running all the docker commands individually, we can leverage a <code>docker-compose</code> file to start the <code>postgres</code> service. It gives us a way to specify all the required configuration parameters declaratively in a file.</p>
<blockquote>
<p>Ensure you have docker-compose installed and available. If not, please follow the steps <a target="_blank" href="https://docs.docker.com/compose/install/">here</a> to get it</p>
</blockquote>
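<p>A quick way to check (note that newer Docker releases also bundle Compose as the <code>docker compose</code> subcommand, so either form may be available on your machine):</p>

```shell
# Check whether a Compose implementation is available on PATH.
if command -v docker-compose >/dev/null 2>&1; then
  docker-compose --version
elif docker compose version >/dev/null 2>&1; then
  docker compose version
else
  echo "docker-compose not found - install it first"
fi
```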
<h3 id="create-a-docker-compose-file">Create a docker-compose file</h3>
<ul>
<li><p>To ensure an easy and clean installation, we first create a new folder named <code>postgres</code> and move into that folder</p>
<pre><code>$ mkdir postgres
$ cd postgres
</code></pre></li>
<li><p>Next, we use a docker-compose file to download the postgres image and get the service up and running. Use your favorite utility to create a YAML file as below (using nano or PowerShell):</p>
<pre><code>$ nano docker-compose.yml
</code></pre><p>OR </p>
<pre><code>$ ni docker-compose.yml
</code></pre></li>
<li>Add the below content to the <code>docker-compose</code> file and save</li>
</ul>
<pre><code><span class="hljs-attribute">version</span>: "3.8"

<span class="yaml"><span class="hljs-attr">services:</span>
  <span class="hljs-attr">db:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">postgres</span>
    <span class="hljs-attr">restart:</span> <span class="hljs-string">always</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">POSTGRES_DB=postgres</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">POSTGRES_USER=postgres</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">POSTGRES_PASSWORD=password</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"5432:5432"</span>
<span class="hljs-attr">volumes:</span>
  <span class="hljs-attr">pgdata:</span>
    <span class="hljs-attr">external:</span> <span class="hljs-literal">true</span></span>
</code></pre><p>The YAML configuration defines the following:</p>
<ul>
<li><p><em>version:  file format version for <code>docker-compose</code></em></p>
</li>
<li><p><em>services: defines a service in compose file with name <code>db</code></em></p>
</li>
<li><p><em>image: uses the <code>postgres</code> image, which defaults to the <code>latest</code> tag</em></p>
</li>
<li><p><em>restart: specifies a restart policy for how a container should or should not be restarted on exit.</em></p>
</li>
<li><p><em>environment: used to specify the environment variables used for starting the services</em></p>
</li>
<li><p><em>ports: 5432 is the default port number for PostgreSQL. We map port 5432 in the container to the same port on the host machine</em></p>
</li>
<li><p><em>volumes: directive that mounts the source directories or volumes from host machine at target paths inside the container.</em></p>
</li>
</ul>
<blockquote>
<p>Please ensure the YAML syntax is correct - I would suggest using Visual Studio Code to validate the file before starting the service. Also note that since the <code>pgdata</code> volume is declared <code>external: true</code>, it must already exist; create it once with <code>docker volume create pgdata</code> before bringing the service up.</p>
</blockquote>
<h3 id="start-the-postgres-container-with-docker-compose">Start the postgres container with docker-compose</h3>
<ul>
<li>Run the below command to start the container in detached mode using the <code>-d</code> flag</li>
</ul>
<pre><code>$ docker-compose up -d
</code></pre><ul>
<li>You can check the logs with the command</li>
</ul>
<pre><code>$ docker-compose logs -f
</code></pre><p>That's it! Now you are ready to use the <code>postgres</code> DB in your development activities. Use the same steps as mentioned above to connect to the DB.</p>
<p>Finally, use the below to stop the <code>postgres</code> service</p>
<pre><code>$ docker-compose <span class="hljs-keyword">stop</span>
</code></pre><blockquote>
<p>In order to find the required image in the registry, use the below steps</p>
</blockquote>
<h3 id="search-for-a-specific-image-in-docker-hub">Search for a specific image in docker hub</h3>
<p>Once you sign up for a Docker Hub account, you can use those credentials in the terminal. Issue the below command to log in to Docker Hub with your user details.</p>
<pre><code>$ docker <span class="hljs-keyword">login</span>
<span class="hljs-keyword">Login</span> <span class="hljs-keyword">with</span> your Docker ID <span class="hljs-keyword">to</span> push <span class="hljs-keyword">and</span> pull images <span class="hljs-keyword">from</span> Docker Hub. <span class="hljs-keyword">If</span> you don<span class="hljs-string">'t have a Docker ID, head over to https://hub.docker.com to create one.
Username: rakeshvardan
Password:
Login Succeeded</span>
</code></pre><p>If you are not familiar with the image name, you can search the entire Docker public registry as below and choose an appropriate image (it is always recommended to use the official images from the respective product teams)</p>
<pre><code>$ docker <span class="hljs-keyword">search</span> postgres
<span class="hljs-type">NAME</span>                                    DESCRIPTION                                     STARS               OFFICIAL            AUTOMATED
postgres                                The PostgreSQL <span class="hljs-keyword">object</span>-relational <span class="hljs-keyword">database</span> sy…   <span class="hljs-number">8383</span>                [OK]
</code></pre><hr />
<h2 id="running-multiple-instances-of-postgres-in-the-same-host">Running multiple instances of postgres in the same host</h2>
<p>We can start multiple postgres containers in the same host and use different versions of the servers for local development.</p>
<pre><code>$ docker run --rm --name pg-docker-dev -e POSTGRES_PASSWORD=devpassword -d -p 5432:5432 -v $HOME/docker/volumes/postgresdev:/var/lib/postgresql/data postgres
</code></pre><pre><code>$ docker run --rm --name pg-docker-qa -e POSTGRES_PASSWORD=qapassword -d -p 5433:5432 -v $HOME/docker/volumes/postgresqa:/var/lib/postgresql/data postgres
</code></pre><p>As you can see, we started two different <code>postgres</code> databases for two environments - one for <em>development</em> and another for <em>testing</em>. Once started, you can use these DBs independently of each other. We have also mounted two different host folders (<code>postgresdev</code> and <code>postgresqa</code>), one per database, to persist the data on the host machine. A different host port, <code>5433</code>, is mapped for the QA DB. You can also run two different versions of <code>postgres</code> on the same machine if you have a requirement to do so.</p>
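<p>Client configuration then differs only in the host port being dialed. A small sketch (the passwords match the two <code>docker run</code> commands above):</p>

```shell
# Each environment gets its own connection URL; only the host port
# (and password) differs between dev and qa.
DEV_URL="postgresql://postgres:devpassword@localhost:5432/postgres"
QA_URL="postgresql://postgres:qapassword@localhost:5433/postgres"

echo "$DEV_URL"
echo "$QA_URL"

# psql accepts the URL directly, e.g.:
#   psql "$DEV_URL"
```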
<pre><code><span class="hljs-string">$</span> <span class="hljs-string">docker</span> <span class="hljs-string">ps</span>
<span class="hljs-string">CONTAINER</span> <span class="hljs-string">ID</span>        <span class="hljs-string">IMAGE</span>               <span class="hljs-string">COMMAND</span>                  <span class="hljs-string">CREATED</span>             <span class="hljs-string">STATUS</span>              <span class="hljs-string">PORTS</span>                    <span class="hljs-string">NAMES</span>
<span class="hljs-string">48b82d7dc83f</span>        <span class="hljs-string">postgres</span>            <span class="hljs-string">"docker-entrypoint.s…"</span>   <span class="hljs-number">14</span> <span class="hljs-string">seconds</span> <span class="hljs-string">ago</span>      <span class="hljs-string">Up</span> <span class="hljs-number">12</span> <span class="hljs-string">seconds</span>       <span class="hljs-number">0.0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span><span class="hljs-string">:5433-&gt;5432/tcp</span>   <span class="hljs-string">pg-docker-qa</span>
<span class="hljs-string">e9814264368a</span>        <span class="hljs-string">postgres</span>            <span class="hljs-string">"docker-entrypoint.s…"</span>   <span class="hljs-number">42</span> <span class="hljs-string">seconds</span> <span class="hljs-string">ago</span>      <span class="hljs-string">Up</span> <span class="hljs-number">40</span> <span class="hljs-string">seconds</span>       <span class="hljs-number">0.0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span><span class="hljs-string">:5432-&gt;5432/tcp</span>   <span class="hljs-string">pg-docker-dev</span>
</code></pre><h2 id="conclusion">Conclusion</h2>
<p>I hope you now have a basic idea of how to get started using <code>postgres</code> with <code>docker</code>. Next time you need to install some software, try checking for a respective Docker image and use that instead. You will have a lot of fun.</p>
<p>Thank you !</p>
<h3 id="references">References</h3>
<ul>
<li><a target="_blank" href="https://hub.docker.com/_/postgres">Docker hub repo for postgreSQL</a></li>
<li><a target="_blank" href="https://phoenixnap.com/kb/deploy-postgresql-on-docker#:~:text=Running%20PostgreSQL%20on%20Docker%20Containers,-Deploying%20a%20Postgres&amp;text=The%20first%20option%20uses%20Docker,file%20with%20all%20the%20specifications.">How To Deploy PostgreSQL On Docker Container</a></li>
<li><a target="_blank" href="https://hackernoon.com/dont-install-postgres-docker-pull-postgres-bee20e200198">Don’t install Postgres. Docker pull Postgres</a></li>
</ul>
]]></content:encoded></item></channel></rss>