
Latest interview

 1) Set vs Map

A Set is a collection of unique values, whereas a Map is a collection of key-value pairs in which each key maps to a value.

https://osgoodgunawan.medium.com/map-vs-set-vs-object-in-javascript-7345b455afcd#:~:text=The%20difference%20between%20Map%20and%20Set,the%20keys%20to%20the%20value.
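A quick illustrative sketch in plain JavaScript (the values are arbitrary):

// Set: a collection of unique values; duplicates are ignored.
const ids = new Set([1, 2, 2, 3]);
console.log(ids.size);   // 3
console.log(ids.has(2)); // true

// Map: a collection of key-value pairs; keys can be of any type.
const roles = new Map();
roles.set('alice', 'admin');
roles.set('bob', 'viewer');
console.log(roles.get('alice')); // 'admin'
console.log(roles.size);         // 2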

2) API versioning

4) Role-based access

5) HTTP status code series (1xx–5xx)

  • 1xx informational response – the request was received, continuing process
  • 2xx successful – the request was successfully received, understood, and accepted
  • 3xx redirection – further action needs to be taken in order to complete the request (e.g., 301 Moved Permanently: this and all future requests should be directed to the given URI)
  • 4xx client error – the request contains bad syntax or cannot be fulfilled
  • 5xx server error – the server failed to fulfil an apparently valid request

6) Types of execution context: global vs. local

7) Sequelize and Knex libraries

8) Different plugins in the Serverless Framework


9) Duplex and Transform streams:

In Node.js, Duplex and Transform streams are two types of streams that can be used for data processing and manipulation. While they share some similarities, they have different purposes and behaviors. Let's take a closer look at each of them:


Duplex Stream:


A Duplex stream is a type of stream that can be both readable and writable.

It allows data to flow in both directions, meaning you can both read from and write to a Duplex stream.

Examples of Duplex streams in Node.js include TCP sockets (instances of the net.Socket class).

Duplex streams are commonly used for scenarios where you need bidirectional communication, such as networking protocols or file system operations.

Transform Stream:


A Transform stream is a type of Duplex stream that provides an additional feature: data transformation.

It takes input data, performs some transformations on it, and produces the transformed data as output.

Transform streams implement both the Readable and Writable interfaces, so they can be used as both input and output for other streams.

Examples of Transform streams in Node.js include zlib.createGzip() for compressing data and crypto.createCipheriv() for encrypting data.

Transform streams are useful when you need to modify or process data as it passes through the stream, such as data compression, encryption, or parsing.

In summary, Duplex streams allow bidirectional data flow, while Transform streams are a specialized type of Duplex streams that focus on data transformation. Duplex streams are typically used for communication purposes, while Transform streams are used for data manipulation during streaming operations.
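As a minimal sketch (the stream name is just for illustration), a Transform stream that upper-cases whatever is written to it could look like this:

const { Transform } = require('stream');

// A simple Transform stream: reads chunks, upper-cases them, and pushes them out.
const upperCase = new Transform({
  transform(chunk, encoding, callback) {
    callback(null, chunk.toString().toUpperCase());
  }
});

// Usage: pipe stdin through the transform to stdout.
process.stdin.pipe(upperCase).pipe(process.stdout);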





Thread vs process in Node.js



In Node.js, threads and processes are two different concepts used for parallelism and concurrency. Let's explore the differences between them:


Threads:


Threads are lightweight units of execution within a process.

Multiple threads can exist within a single process and share the same memory space.

Threads allow for concurrent execution of multiple tasks within a single program.

In Node.js, JavaScript code runs on a single thread by default, often referred to as the "main" or "event loop" thread.

Node.js provides the worker_threads module to create and manage additional threads, enabling parallel execution of JavaScript code.

Worker threads can be used for CPU-intensive tasks or when you want to perform operations concurrently to leverage multiple CPU cores efficiently.

Processes:


Processes, on the other hand, are independent instances of a program that run in their own memory space.

Each process has its own memory and resources, including its own instance of the Node.js runtime.

Processes are isolated from each other and communicate via inter-process communication (IPC) mechanisms.

In Node.js, you can create child processes using the child_process module, which allows you to run separate Node.js instances or execute external programs.

Child processes can be used for various purposes, such as running blocking operations, utilizing multiple CPUs, or executing external programs that are not written in JavaScript.

In summary, threads are units of execution within a process, allowing for concurrent execution of tasks within the same memory space. Processes, on the other hand, are independent instances of a program with their own memory and resources. Node.js provides support for both threads and processes, with worker threads for parallel execution within a single program and child processes for running separate instances or external programs.
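A minimal worker_threads sketch (the workload and message shape are illustrative; the same file acts as both parent and worker):

const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
  // Main thread: spin up a worker running this same file and send it some data.
  const worker = new Worker(__filename, { workerData: { n: 40 } });
  worker.on('message', (result) => console.log('fib result:', result));
  worker.on('error', (err) => console.error(err));
} else {
  // Worker thread: run a CPU-intensive task off the main event loop.
  const fib = (n) => (n < 2 ? n : fib(n - 1) + fib(n - 2));
  parentPort.postMessage(fib(workerData.n));
}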



Spawn vs fork:

In Node.js, both spawn and fork are methods provided by the child_process module, but they have different use cases and behaviors:

  1. spawn:

    • The spawn method launches a new process to execute a command.
    • It allows you to execute external commands and run them asynchronously.
    • The child process created by spawn does not have a direct connection to the parent process other than the standard input/output streams.
    • You can communicate with the child process by reading from its stdout and stderr streams or writing to its stdin stream.
    • The child process created with spawn runs as a separate operating-system process; if the spawned command is itself a Node.js program, it has its own V8 instance and JavaScript runtime environment.
  2. fork:

    • The fork method creates a new Node.js process and runs a JavaScript module in that process.
    • It is specifically designed for running separate instances of Node.js, enabling inter-process communication (IPC) between the parent and child processes.
    • The child process created by fork gets its own copies of the Node.js environment and global objects (such as process and console); it does not share memory with the parent process.
    • You can communicate between the parent and child processes using a messaging channel, allowing them to exchange messages.
    • The child process created with fork also runs in a separate V8 instance, using the same Node.js executable and version as the parent process.

To summarize, spawn is used for executing external commands as separate processes, while fork is used for creating separate Node.js processes that communicate with the parent over a built-in IPC messaging channel. Both spawn and fork create child processes that run independently of the parent, in their own memory space.
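A short sketch of both (the command and the child module path are illustrative; 'ls' assumes a Unix-like system, and ./worker.js is a hypothetical module):

const { spawn, fork } = require('child_process');

// spawn: run an external command and read its output streams.
const ls = spawn('ls', ['-la']);
ls.stdout.on('data', (data) => console.log(`stdout: ${data}`));
ls.on('close', (code) => console.log(`ls exited with code ${code}`));

// fork: run another Node.js module and exchange messages over the built-in IPC channel.
const child = fork('./worker.js'); // hypothetical module path
child.on('message', (msg) => console.log('from child:', msg));
child.send({ task: 'start' });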


10) Do not call next()


11) JWT token structure:

A JWT (JSON Web Token) consists of three parts separated by dots (.): the header, the payload, and the signature. The structure of a JWT is as follows:

<header>.<payload>.<signature>

Header (encoded): eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9
Decoded: { "alg": "HS256", "typ": "JWT" }
The header specifies the algorithm used for the signature, which is HMAC-SHA256 in this example.

Payload (encoded): eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWV9
Decoded: { "sub": "1234567890", "name": "John Doe", "admin": true }
The payload contains claims, such as the subject ("sub"), name ("name"), and admin status ("admin"). These claims provide information about the entity associated with the token.

Signature (encoded): 2jQ-LVG3yP8mXssDmHSatfBskv8hgyvNjvj_kEMu4v0
The signature is generated by signing the encoded header and payload with a secret key using the specified algorithm. It ensures the integrity of the token and verifies that it has not been tampered with.

When using the JWT, the client sends the complete token in the Authorization header (or another suitable location) to authenticate and authorize requests to a server or API. The server verifies the token's authenticity by decoding it and checking the signature with the shared secret key.
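As a minimal sketch of producing and checking such a token, assuming the jsonwebtoken npm package is installed (the secret and payload values are placeholders):

const jwt = require('jsonwebtoken');

const secret = 'my-shared-secret'; // placeholder secret
const payload = { sub: '1234567890', name: 'John Doe', admin: true };

// Sign: produces the <header>.<payload>.<signature> string described above.
const token = jwt.sign(payload, secret, { algorithm: 'HS256', expiresIn: '1h' });
console.log(token);

// Verify: checks the signature and returns the decoded payload, or throws if the token is invalid.
const decoded = jwt.verify(token, secret);
console.log(decoded.name); // 'John Doe'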



Cognito User Pool vs Identity Pool?


Let's explain AWS Cognito User Pool and AWS Cognito Identity Pool in a simple way:

  1. AWS Cognito User Pool:

    • Imagine you are managing a party and you want to keep track of the guests attending.
    • AWS Cognito User Pool is like a guest list for your application or service.
    • It provides a user directory or database where you can securely store and manage user identities, such as usernames, passwords, and other user attributes.
    • User Pool handles user registration, authentication, and basic user management tasks.
    • It allows users to sign up, sign in, and manage their account information.
    • User Pool integrates with other AWS services, enabling you to add user authentication and authorization features to your applications easily.
  2. AWS Cognito Identity Pool:

    • Now, let's imagine you are hosting a VIP event with special access privileges.
    • AWS Cognito Identity Pool is like a VIP pass or access control system for your application or service.
    • It provides temporary, limited-privilege AWS credentials to users or groups of users.
    • Identity Pool helps you control access to AWS resources, such as S3 buckets or DynamoDB tables, based on user identity.
    • When a user signs in with their credentials using a User Pool or other identity providers (like Facebook or Google), the Identity Pool verifies the user's identity and provides them with AWS credentials.
    • These temporary credentials can be used to access specific AWS services and resources that you define and control.
    • Identity Pool enables fine-grained access control, allowing you to grant different levels of access based on user roles or attributes.

In summary, AWS Cognito User Pool is a user directory and authentication service that manages user identities and basic authentication tasks. On the other hand, AWS Cognito Identity Pool is an access control system that provides temporary AWS credentials and fine-grained access control to AWS resources based on user identity. Together, they offer comprehensive user management, authentication, and authorization capabilities for your applications or services in the AWS ecosystem.




12) Different types of SQS and visibility timeout in SQS:

In AWS (Amazon Web Services), Amazon Simple Queue Service (SQS) offers two queue types: Standard and FIFO.


Standard SQS:

Standard SQS is the default and original SQS service provided by AWS.

It offers a reliable, scalable, fully managed message queuing service.

Messages are stored redundantly across multiple availability zones to provide high durability.

It provides at-least-once message delivery, where occasionally duplicate messages may occur.

Standard SQS offers nearly unlimited throughput and can handle a high volume of messages.

It provides a default retention period of 4 days for messages.

Standard SQS provides best-effort ordering; messages may occasionally be delivered out of order, and strict FIFO (First-In-First-Out) ordering is not guaranteed.

It is suitable for most distributed application use cases and provides a good balance between throughput and cost.

In addition to the standard SQS, AWS introduced FIFO queues, which are a specialized type of queue that guarantees message ordering:


FIFO (First-In-First-Out) SQS:

FIFO SQS queues guarantee the order of messages within a queue.

It ensures that messages are processed in the exact order they are sent.

FIFO queues provide exactly-once message processing, meaning duplicate messages are not introduced.

It achieves ordering and deduplication by using a message group ID that is specified by the sender.

FIFO SQS supports up to 300 messages per second without batching, or up to 3,000 messages per second with batching (10 messages per API call).

It has a higher cost compared to the standard SQS due to the additional guarantees provided.

It's important to choose the appropriate type of SQS based on your application requirements. If message ordering is critical, FIFO SQS is the recommended choice. However, if message ordering is not a strict requirement and you need high throughput and scalability, the standard SQS is typically sufficient.


Both types of SQS offer a reliable and scalable messaging service that can be integrated with various AWS services and applications to decouple and distribute workloads effectively.



By using the visibility timeout mechanism, AWS SQS ensures that messages are processed reliably, even if a consumer encounters issues during processing. It allows failed messages to be retried by other consumers and prevents them from being stuck in an unprocessed state indefinitely.


In summary, the visibility timeout in AWS SQS ensures that messages are temporarily hidden from other consumers while being processed. If a consumer fails to process a message within the timeout period, the message becomes visible again, allowing another consumer to retry its processing.
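A rough sketch of receiving and deleting a message with a per-receive visibility timeout, assuming the AWS SDK v3 package @aws-sdk/client-sqs (the region and queue URL are placeholders):

const { SQSClient, ReceiveMessageCommand, DeleteMessageCommand } = require('@aws-sdk/client-sqs');

const sqs = new SQSClient({ region: 'us-east-1' }); // placeholder region
const QueueUrl = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'; // placeholder URL

async function poll() {
  const { Messages } = await sqs.send(new ReceiveMessageCommand({
    QueueUrl,
    MaxNumberOfMessages: 1,
    WaitTimeSeconds: 10,   // long polling
    VisibilityTimeout: 30  // hide the message from other consumers for 30 seconds
  }));

  for (const msg of Messages || []) {
    // ... process the message ...
    // Delete only after successful processing; otherwise it becomes visible again after the timeout.
    await sqs.send(new DeleteMessageCommand({ QueueUrl, ReceiptHandle: msg.ReceiptHandle }));
  }
}

poll().catch(console.error);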




  1. Dead Letter Queue (DLQ):

    • To handle messages that repeatedly fail processing, you can set up a Dead Letter Queue.
    • The DLQ acts as a holding area for failed messages that need further investigation or manual processing.
    • When a message fails to be processed from the primary queue, it is automatically moved to the DLQ.




No, in AWS SQS (Simple Queue Service), the Dead Letter Queue (DLQ) is not enabled by default. You need to explicitly configure a DLQ for an SQS queue if you want to use it.

The DLQ provides a way to handle and process messages that cannot be successfully processed by the primary queue. When a message fails to be processed after retries, it can be moved to the DLQ for further analysis or manual handling.

To configure a DLQ for an SQS queue, you need to follow these steps:

  1. Create a separate SQS queue that will serve as the DLQ.
  2. Specify the DLQ ARN (Amazon Resource Name) and a maxReceiveCount when configuring the primary queue's redrive policy, as in the sketch below.
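For illustration, the redrive policy can be attached with the AWS SDK v3 (the region, queue URL, and DLQ ARN below are placeholders):

const { SQSClient, SetQueueAttributesCommand } = require('@aws-sdk/client-sqs');

const sqs = new SQSClient({ region: 'us-east-1' }); // placeholder region

async function configureDlq() {
  // After 5 failed receives, messages on the primary queue move to the DLQ.
  await sqs.send(new SetQueueAttributesCommand({
    QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789012/primary-queue', // placeholder
    Attributes: {
      RedrivePolicy: JSON.stringify({
        deadLetterTargetArn: 'arn:aws:sqs:us-east-1:123456789012:primary-queue-dlq', // placeholder
        maxReceiveCount: 5
      })
    }
  }));
}

configureDlq().catch(console.error);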


13) Provisioned concurrency and reserved concurrency in AWS?

In AWS Lambda, "provisioned concurrency" and "reserved concurrency" are two features that allow you to control the concurrency and scalability of your serverless functions. Let's understand each concept:

  1. Provisioned Concurrency:

    • Provisioned concurrency is a feature that allows you to pre-warm AWS Lambda functions with a specified number of instances.
    • By configuring provisioned concurrency, you ensure that a set number of function instances are always ready and available to process requests without experiencing cold starts.
    • This feature is useful when you have predictable traffic patterns or when low latency is crucial for your application.
    • Provisioned concurrency ensures that your functions are initialized and ready to serve requests immediately, reducing the impact of cold starts.
  2. Reserved Concurrency:

    • Reserved concurrency is a feature that allows you to limit the maximum number of concurrent invocations for a specific Lambda function.
    • With reserved concurrency, you can set an upper limit on the number of concurrent executions of a function, regardless of the number of invocations or available resources.
    • This feature is useful when you want to control the scaling behavior of your function or ensure that it doesn't consume all available resources, especially if your function interacts with other dependent resources.
    • Reserved concurrency allows you to define a hard limit on the number of concurrent executions, protecting other resources and preventing resource exhaustion (a configuration sketch for both settings follows below).
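A rough sketch of setting both, assuming the AWS SDK v3 package @aws-sdk/client-lambda (the function name, alias, region, and numbers are placeholders):

const {
  LambdaClient,
  PutProvisionedConcurrencyConfigCommand,
  PutFunctionConcurrencyCommand
} = require('@aws-sdk/client-lambda');

const lambda = new LambdaClient({ region: 'us-east-1' }); // placeholder region

async function configureConcurrency() {
  // Provisioned concurrency: keep 5 pre-initialized instances of the "live" alias warm.
  await lambda.send(new PutProvisionedConcurrencyConfigCommand({
    FunctionName: 'my-function', // placeholder
    Qualifier: 'live',           // alias or version; placeholder
    ProvisionedConcurrentExecutions: 5
  }));

  // Reserved concurrency: cap the function at 50 concurrent executions overall.
  await lambda.send(new PutFunctionConcurrencyCommand({
    FunctionName: 'my-function', // placeholder
    ReservedConcurrentExecutions: 50
  }));
}

configureConcurrency().catch(console.error);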


14) Cold start and any provisioning by AWS?

In AWS, "cold start" refers to the delay or latency experienced when initializing a serverless function, such as an AWS Lambda function, for the first time or after a period of inactivity. When a serverless function receives a request, the infrastructure needs to allocate and provision resources to execute the function code. This provisioning process, which includes loading the runtime environment, initializing dependencies, and setting up network connections, introduces a slight delay known as a cold start.

To mitigate the impact of cold starts and improve the overall performance of serverless applications, you can employ various strategies:

  1. Provisioned Concurrency: By configuring a desired number of pre-initialized instances for your serverless function, you maintain a pool of warm instances and reduce the likelihood of experiencing cold starts, keeping the function readily available to handle incoming requests.

  2. Scheduled Invocations: To keep cold starts to a minimum, you can schedule periodic invocations of your serverless function so it remains warm and ready to respond to requests. Scheduling invocations at regular intervals keeps the function's runtime environment active and reduces the chances of encountering cold starts during peak usage.

  3. Request Batching and Caching: By aggregating requests or implementing caching mechanisms, you can reduce the number of individual invocations, thereby mitigating the impact of cold starts. Batching requests allows you to process multiple requests in a single warm invocation, improving overall efficiency.

Properly managing cold starts is crucial for optimizing the performance and responsiveness of serverless applications. Employing strategies to minimize cold starts and leveraging features like concurrent executions and scheduled invocations can help ensure a smooth and efficient serverless experience.
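One illustrative pattern for the scheduled-invocation strategy, assuming an EventBridge (CloudWatch Events) scheduled rule triggers the function periodically:

// Lambda handler that short-circuits scheduled "keep warm" pings.
exports.handler = async (event) => {
  // EventBridge scheduled events arrive with these fields.
  if (event.source === 'aws.events' && event['detail-type'] === 'Scheduled Event') {
    return { warmed: true }; // do no real work; just keep the instance initialized
  }

  // ... normal request handling goes here ...
  return { statusCode: 200, body: 'ok' };
};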

Regarding "provisioning" in AWS, it generally refers to the process of allocating and configuring resources to meet the requirements of your application or workload. Provisioning involves setting up and preparing various AWS services, such as compute instances (EC2), databases (RDS), queues (SQS), or storage (S3), to ensure they are available and properly configured to handle your application's needs.

Provisioning can include tasks like:

  1. Choosing the appropriate instance types, storage options, and configurations for your EC2 instances.
  2. Configuring and optimizing database instances, specifying parameters like storage size, CPU, memory, and replication settings.
  3. Setting up networking and security configurations, including Virtual Private Cloud (VPC) settings, security groups, and access control policies.
  4. Configuring scaling options, such as auto-scaling groups, to automatically adjust resources based on demand.
  5. Configuring load balancers to distribute traffic and improve availability and performance.
  6. Defining resource quotas, access controls, and billing configurations.

Proper provisioning ensures that your AWS resources are appropriately sized, configured, and ready to handle the workload efficiently. It involves making informed decisions based on your application's requirements, performance expectations, scalability needs, and budget considerations.

15) Plugins:

  - serverless-iam-roles-per-function
  - serverless-plugin-include-dependencies


Flatten object


function flattenObject(obj, parentKey = '') {
  let flattenedObject = {};

  for (let key in obj) {
    if (obj.hasOwnProperty(key)) {
      let newKey = parentKey ? `${parentKey}.${key}` : key;
      if (typeof obj[key] === 'object' && obj[key] !== null) {
        Object.assign(flattenedObject, flattenObject(obj[key], newKey));
      } else {
        flattenedObject[newKey] = obj[key];
      }
    }
  }

  return flattenedObject;
}

// Usage:
const data = {
  name: 'Xoriant',
  place: 'Mumbai',
  address: {
    city: 'Mumbai',
    geo: {
      lat: '40.5897200',
      long: '49.6686100'
    },
    country: 'India'
  }
};

const flattenedData = flattenObject(data);
console.log(flattenedData);
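// Expected output:
// {
//   name: 'Xoriant',
//   place: 'Mumbai',
//   'address.city': 'Mumbai',
//   'address.geo.lat': '40.5897200',
//   'address.geo.long': '49.6686100',
//   'address.country': 'India'
// }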



// (function () {

//   var a = b = 3;

// })();


// console.log("a defined? " + (typeof a !== 'undefined'));

// console.log("b defined? " + (typeof b !== 'undefined'));



// function foo1() {

//   return {

//     bar: "hello"

//   };

// }




// function foo2() {

//   return
//   {

//     bar: "hello"

//   };

// }




// console.log(foo1());

// console.log(foo2());
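// If uncommented: foo1() returns { bar: "hello" }, while foo2() returns undefined —
// automatic semicolon insertion terminates the bare `return` before the object literal
// on the next line.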




(function () {

  console.log(1);

  setTimeout(function () { console.log(2) }, 1000);

  setTimeout(function () { console.log(3) }, 0);

  console.log(4);

})();
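// Output order: 1, 4, 3, 2 — the synchronous logs run first, then the 0 ms timeout
// callback fires on a later event-loop turn, and the 1000 ms timeout fires last.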
