How to access S3 bucket from another account
https://exatosoftware.com/how-to-access-s3-bucket-from-another-account/ | Mon, 25 Nov 2024


Amazon Web Services (AWS) offers the highly scalable, reliable, and secure Amazon Simple Storage Service (S3) for object storage. Several factors make accessing S3 buckets crucial, especially in the context of cloud computing and data management:

1. Data Storage: S3 is used to store a variety of data, including backups, log files, documents, images, and videos. Users and applications can access S3 buckets to retrieve and store this data.

2. Data Backup and Recovery: S3 is frequently used as a dependable and affordable choice for data backup and disaster recovery. Users can retrieve backup data from S3 buckets when necessary.

3. Web hosting: S3 can be used to deliver web content like HTML files, CSS, JavaScript, and images as well as static websites and their associated static files. Serving this content to website visitors requires access to S3 buckets.

4. Data Sharing: S3 offers a method for securely sharing data with others. You can give access to particular objects in your S3 bucket to other AWS accounts or even the general public by granting specific permissions.

5. Data analytics: S3 is frequently used by businesses as a “data lake” to store massive amounts of structured and unstructured data. For data scientists and analysts who need to process, analyze, and gain insights from this data using tools like AWS Athena, Redshift, or outside analytics platforms, access to S3 buckets is essential.

6. Content Delivery: S3 and Amazon CloudFront, a content delivery network (CDN), can be combined to deliver content quickly and globally. A CloudFront distribution must be able to access the S3 bucket it uses as its origin.

7. Application Integration: A wide variety of programs and services, both inside and outside of AWS, can integrate with S3 to read from or write to S3 buckets. For applications to exchange data, this integration is necessary.

8. Log Storage: AWS services such as AWS CloudTrail and AWS Elastic Load Balancing frequently use S3 as the storage location for their log files. Reviewing and analyzing these logs necessitates accessing S3 buckets.

9. Big Data and Machine Learning: Workloads involving big data and machine learning frequently use S3 as a data source. To run analytics, store datasets, and train machine learning models, data scientists and engineers use S3 buckets.

10. Compliance and Governance: Managing compliance and governance policies requires access to S3 buckets. Sensitive data stored in S3 can be monitored and audited by organizations to make sure it complies with legal requirements.

11. Data Archiving: S3 offers Glacier and Glacier Deep Archive as options for data archiving. If necessary, archived data must be retrieved using S3 buckets.

These are some of the key capabilities of the S3 bucket in AWS, and they are why it is often recommended to developers who need to keep applications fast and secure. AWS also provides other storage services, so let us look at how the S3 bucket differs from them.

Difference between S3 bucket and other storage in AWS

To meet a range of needs and use cases, Amazon Web Services (AWS) provides a number of storage services. There are other storage services available in AWS besides Amazon S3, which is one of the most well-known and frequently used storage options. The following are some significant distinctions between Amazon S3 and other AWS storage options:

1. Amazon S3 vs. Amazon EBS (Object Storage vs. Block Storage)

– Amazon S3 is an object storage service primarily used for storing and retrieving files and objects, while Amazon Elastic Block Store (EBS) offers block-level storage for use with EC2 instances. EBS volumes are typically attached to EC2 instances to give applications and databases low-latency, high-performance storage.

– While EBS is better suited for running applications that require block storage, such as databases, S3 is ideal for storing large amounts of unstructured data like images, videos, backups, and static website content.

2. Amazon S3 vs. Amazon Glacier (S3 Glacier)

– Amazon Glacier is a storage solution made for long-term backup and archival needs. Compared to S3, it offers cheaper storage, but with slower retrieval times. S3 is better suited for data that is accessed frequently, whereas Glacier is better for data that needs to be stored for a long time and accessed sparingly.

– Glacier is frequently used to meet data retention guidelines and compliance requirements.

3. Amazon EFS (Elastic File System) vs. Amazon S3

– Amazon EFS is a fully managed, scalable file storage service that provides network-attached storage for EC2 instances. It is intended for scenarios in which multiple instances require concurrent access to the same file system.

– Unlike EFS, which is a file storage service, S3 is an object storage service. Large-scale static data storage is better handled by S3, whereas shared file storage applications are better served by EFS.

4. Amazon S3 vs. Amazon RDS (Relational Database Service) Storage

– Amazon RDS is a managed database service that provides storage for database engines such as PostgreSQL, MySQL, and others. This storage is closely tied to the database engine and holds database-specific data.

– S3 is a general-purpose object storage service rather than database storage. In addition to databases, it is frequently used to store backups, logs, and other application data.

5. S3-Compatible Storage Options vs. Amazon S3

– Some AWS customers choose to use storage options from other vendors that are S3 compatible and can provide functionality similar to object storage while being compatible with S3 APIs. Compared to native Amazon S3, the performance, features, and cost of these options may vary.

6. Amazon S3 vs. Amazon FSx for Lustre and Amazon FSx for Windows File Server

– Amazon FSx provides managed file storage solutions for Windows and Lustre workloads. It is designed for specific file system requirements and is not as versatile as S3 for storing and serving various types of data.

With the above comparison, it is clear that Amazon S3 is a versatile object storage service that’s suitable for a wide range of use cases involving unstructured data and file storage. Other AWS storage services, such as EBS, Glacier, EFS, RDS, and FSx, cater to more specialized storage needs like block storage, archival storage, file storage, and database storage. The choice of storage service depends on your specific application requirements and use cases.

How to access S3 bucket from your account

In short, accessing S3 buckets is essential for effectively using AWS services, managing data storage, serving web content, and integrating S3 with other applications and workflows. Modern cloud computing and data management practices rely heavily on it.

To access an Amazon S3 (Simple Storage Service) bucket from your own AWS (Amazon Web Services) account, follow these general steps. They assume you have already created an AWS account and configured the required permissions and credentials:

1. Log in to the AWS Management Console by visiting https://aws.amazon.com.

– Enter the login information for your AWS account and click “Sign In to the Console”.

2. Find the S3 Service

– After logging in, look for “S3” in the AWS services search bar or under “Storage” in the AWS services menu.

– To access the S3 dashboard, click on “S3”.

3. Create or Access a Bucket

– From the list of buckets on the S3 dashboard, you can click on the name of an existing bucket if you want to access it.

– If you want to create a new bucket, click the “Create bucket” button and follow the prompts to give it a globally unique name.

4. Setup Bucket Permissions

– Permissions govern who has access to your S3 bucket. To grant access, permissions must be set up.

– Navigate to the “Permissions” tab of your bucket.

– Use bucket policies, Access Control Lists (ACLs), or IAM (Identity and Access Management) policies to grant appropriate permissions to users, roles, or groups within your AWS account.
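For example, a read-only bucket policy can be pasted into the bucket policy editor on the Permissions tab or attached programmatically. The sketch below is a minimal illustration using the AWS SDK for JavaScript (v2); the bucket name and the principal account ID are placeholders, not values from this article:

```javascript
// Minimal sketch: attach a read-only bucket policy with the AWS SDK for JavaScript (v2).
// 'your-bucket-name' and the account ID 111122223333 are placeholders.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const policy = {
  Version: '2012-10-17',
  Statement: [
    {
      Sid: 'AllowReadForSpecificPrincipal',
      Effect: 'Allow',
      Principal: { AWS: 'arn:aws:iam::111122223333:root' }, // placeholder principal; use the specific users or roles you mean to allow
      Action: ['s3:GetObject'],
      Resource: 'arn:aws:s3:::your-bucket-name/*'
    }
  ]
};

s3.putBucketPolicy({ Bucket: 'your-bucket-name', Policy: JSON.stringify(policy) })
  .promise()
  .then(() => console.log('Bucket policy applied'))
  .catch(err => console.error('Failed to apply policy:', err));
```

The same policy JSON can be used directly in the console's bucket policy editor.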

5. Access the S3 Bucket

– Once you have set up the necessary permissions, you can access your S3 bucket using various methods:

a. AWS Management Console: You can browse and manage your S3 objects through the AWS Management Console’s web interface.

b. AWS CLI (Command Line Interface): If you have the AWS CLI installed and configured with the appropriate IAM user credentials, you can use the following command to list the contents of a bucket, for example:


```bash
aws s3 ls s3://your-bucket-name
```

c. AWS SDKs: You can programmatically interact with your S3 bucket using AWS SDKs for a variety of programming languages, such as Python, Java, and Node.js.
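As a minimal sketch of the SDK route, the snippet below uses the AWS SDK for JavaScript (v2) to list a few objects in a bucket; the bucket name and region are placeholders:

```javascript
// Minimal sketch: list objects in a bucket with the AWS SDK for JavaScript (v2).
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' }); // assumed region

s3.listObjectsV2({ Bucket: 'your-bucket-name', MaxKeys: 10 })
  .promise()
  .then(data => data.Contents.forEach(obj => console.log(obj.Key, obj.Size)))
  .catch(err => console.error('Could not list objects:', err));
```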

6. Secure Access: To keep your S3 data secure, follow AWS security best practices. This entails proper permission management, encryption, and regular audits of your bucket settings.

Keep in mind that access to S3 buckets should be managed carefully to prevent unauthorized access or data breaches. Always follow AWS security best practices and grant access only to those who truly need it.

How to access S3 bucket from another account

To access an Amazon S3 bucket from another AWS account, you must configure the necessary permissions and policies to permit that access. This typically entails setting up a cross-account access policy on the S3 bucket in the source AWS account and creating an IAM (Identity and Access Management) role in the target AWS account. The general steps to accomplish this are as follows:

In the source AWS account (the account that owns the S3 bucket):

1. Create an IAM Policy:

– Navigate to the IAM console.

– Create a new IAM policy that grants the desired permissions on the S3 bucket. You can use the AWS managed policies like `AmazonS3ReadOnlyAccess` as a starting point or create a custom policy.

2. Attach the Policy to an IAM User or Group (Optional):

– You can attach the policy to an IAM user or group if you want to grant access to specific users or groups in the target AWS account.

3. Create a Cross-Account Access Role:

– Navigate to the IAM console.

– Create a new IAM role with a trust relationship allowing the target AWS account to assume this role. Here’s an example of a trust policy:


```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::TARGET_ACCOUNT_ID:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Replace `TARGET_ACCOUNT_ID` with the AWS account ID of the target AWS account.

4. Attach the IAM Policy to the Role:

– Attach the IAM policy you created in step 1 to the role.

5. Note the Role ARN:

– Make a note of the ARN (Amazon Resource Name) of the role you created.

In the target AWS account:

6. Create an IAM Role:

– Navigate to the IAM console.

– Create an IAM role that your EC2 instances or applications in this account will assume to access the S3 bucket in the source account.

7. Add an Inline Policy to the Role:

– Attach an inline policy to the role you created in step 6. This policy should grant the necessary permissions to access the S3 bucket in the source account. Here’s an example policy:



```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::SOURCE_BUCKET_NAME/*",
        "arn:aws:s3:::SOURCE_BUCKET_NAME"
      ]
    }
  ]
}
```

Replace `SOURCE_BUCKET_NAME` with the name of the S3 bucket in the source account.

8. Use the Role in Your Application/Instance:

– When launching EC2 instances or running applications in this account that need access to the S3 bucket, specify the IAM role you created in step 6 as the instance or application’s IAM role.

With these steps completed, the target AWS account can assume the role in the source account to access the S3 bucket. This approach ensures secure and controlled access between AWS accounts.
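As an illustration of that final step, the sketch below assumes an application in the target account calls AWS STS to assume the role created in the source account (steps 3-5) and then reads the bucket with the temporary credentials. It uses the AWS SDK for JavaScript (v2); the role ARN and SOURCE_BUCKET_NAME are placeholders for the values noted in steps 5 and 7:

```javascript
// Sketch: assume the cross-account role via STS, then list the source bucket.
const AWS = require('aws-sdk');
const sts = new AWS.STS();

async function listSourceBucket() {
  // Obtain temporary credentials for the role in the source account
  const { Credentials } = await sts.assumeRole({
    RoleArn: 'arn:aws:iam::SOURCE_ACCOUNT_ID:role/CrossAccountS3Access', // placeholder role name and account ID
    RoleSessionName: 'cross-account-s3-read'
  }).promise();

  // Create an S3 client that uses the temporary credentials
  const s3 = new AWS.S3({
    accessKeyId: Credentials.AccessKeyId,
    secretAccessKey: Credentials.SecretAccessKey,
    sessionToken: Credentials.SessionToken
  });

  const objects = await s3.listObjectsV2({ Bucket: 'SOURCE_BUCKET_NAME' }).promise();
  objects.Contents.forEach(obj => console.log(obj.Key));
}

listSourceBucket().catch(console.error);
```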

Developers may find it useful to access an Amazon S3 (Simple Storage Service) bucket from another AWS account in a variety of circumstances, frequently involving teamwork, security, and data sharing.

Advantages for developers

1. Cross-Account Collaboration: Developers may need to work together to share data stored in S3 buckets when several AWS accounts are involved in a project or organization. Developers from various teams or organizations can easily collaborate by granting access to another AWS account.

2. Security Isolation: Occasionally, developers want to maintain data security within a single AWS account while allowing external parties, such as contractors or third-party vendors, access to certain resources. You can securely share data while keeping control over it by granting another account access to an S3 bucket.

3. Data Backup and Restore: Cross-account access can be used by developers to speed up data backup and restore procedures. For example, to ensure data redundancy and disaster recovery, you can set up a backup AWS account to have read-only access to the source AWS account’s S3 bucket.

4. Data Sharing: You can grant read-only access to S3 buckets in your AWS account if you create applications that need to share data with third-party users or services. When distributing files, media, or other assets that must be accessed by a larger audience, this is especially helpful.

5. Resource Isolation: You might want to isolate resources between various AWS accounts when using multiple environments (such as development, staging, and production). By controlling who can read or modify data in each environment when you access an S3 bucket from another account, you can increase security and lower the possibility of unintentional data changes.

6. Compliance and Auditing: Strict access controls and job separation may be required to meet certain regulatory requirements or compliance standards. By offering a controlled and auditable method of sharing data, granting access from another AWS account can aid in ensuring compliance with these standards.

7. Fine-Grained Access Control: When granting access to S3 buckets from another account, AWS Identity and Access Management (IAM) policies can be used to define fine-grained permissions. To increase security and access control, developers can specify which operations (like read, write, and delete) are permitted or disallowed for particular resources.

8. Cost Allocation: Accessing S3 buckets from another account enables you to track usage and costs more accurately when multiple AWS accounts are involved. To understand resource usage across accounts, you can set up detailed billing and cost allocation reports.

To enable cross-account access to an S3 bucket, you typically create an IAM role in the bucket owner's account, attach the required permissions to that role, and establish a trust relationship with the other account. That account can then assume the role and access the S3 bucket securely.

While cross-account access may be advantageous, keep in mind that it needs to be carefully configured and monitored to ensure security and adherence to your organization’s policies. To maintain a safe and organized AWS environment, it is essential to manage IAM policies, roles, and permissions properly.

How to perform Create, Read, Update, and Delete operations using MongoDB
https://exatosoftware.com/how-to-perform-create-read-update-and-delete-operations-using-mongodb/ | Fri, 22 Nov 2024


Difference in CRUD operations in SQL and NoSQL Databases

CRUD (Create, Read, Update, Delete) operations are fundamental actions performed on data in databases. The differences in how these operations are handled between SQL (relational databases) and NoSQL (non-relational databases) databases are rooted in the underlying data models and structures.

SQL Databases

Data Model:
SQL databases use a structured, tabular data model.
Data is organized into tables with predefined schemas.
Tables have rows and columns, and relationships between tables are established using foreign keys.

Create (Insert): Data is inserted into specific tables, adhering to the table’s predefined structure.
INSERT INTO table_name (column1, column2, column3, ...) VALUES (value1, value2, value3, ...);

Read (Select): Data is queried using SQL SELECT statements.
SELECT column1, column2, ... FROM table_name WHERE condition;

Update (Update): Data is modified in existing rows.
UPDATE table_name SET column1 = value1, column2 = value2, ... WHERE condition;

Delete (Delete): Rows are deleted from a table based on specified conditions.
DELETE FROM table_name WHERE condition;

NoSQL Databases

Data Model:

  • NoSQL databases employ various data models, including document-oriented, key-value, wide-column store, and graph databases.
  • The structure is more flexible, and each document or item can have different fields.

CRUD Operations:

  • Create (Insert): Data is typically inserted as documents, items, or key-value pairs without a predefined schema. Example in MongoDB:
db.collection_name.insert({ field1: value1, field2: value2, ... });
  • Read (Find/Get): Data is retrieved based on queries, often using a flexible JSON-like syntax. Example in MongoDB:
db.collection_name.find({ field: value });
  • Update (Update/Modify): Existing documents or items are updated. Example in MongoDB:
db.collection_name.update({ field: value }, { $set: { new_field: new_value } });
  • Delete (Remove/Delete): Documents or items are removed based on specified conditions. Example in MongoDB:
db.collection_name.remove({ field: value });

Key Differences

  • Schema:
    SQL databases have a rigid, predefined schema.
    NoSQL databases are schema-less or have a dynamic schema.
  • Flexibility:
    SQL databases offer less flexibility in terms of changing the schema.
    NoSQL databases provide more flexibility as the data model can evolve over time.
  • Scaling:
    SQL databases typically scale vertically (adding more resources to a single server).
    NoSQL databases are often designed to scale horizontally (adding more servers to distribute the load).

CRUD Operations in MongoDB

MongoDB is a NoSQL database that stores data in a flexible, JSON-like format called BSON. Here’s a brief explanation and examples of how to perform CRUD operations in MongoDB using its official MongoDB Node.js driver.

1. Create (Insert)
To insert data into MongoDB, you can use the insertOne or insertMany method. Here’s an example using insertOne:

const MongoClient = require('mongodb').MongoClient;
const url = 'mongodb://localhost:27017';
const dbName = 'mydatabase';
MongoClient.connect(url, { useNewUrlParser: true, useUnifiedTopology: true }, (err, client) => {
  if (err) throw err;
  const db = client.db(dbName);
  const collection = db.collection('mycollection');

  // Insert one document
  collection.insertOne({
    name: 'John Doe',
    age: 30,
    city: 'New York'
  }, (err, result) => {
    if (err) throw err;

    console.log('Document inserted');
    client.close();
  });
});
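If you need to insert several documents at once, insertMany follows the same pattern. This is a short sketch reusing the same callback-style driver and the same placeholder database and collection names:

const MongoClient = require('mongodb').MongoClient;
const url = 'mongodb://localhost:27017';
const dbName = 'mydatabase';
MongoClient.connect(url, { useNewUrlParser: true, useUnifiedTopology: true }, (err, client) => {
  if (err) throw err;
  const collection = client.db(dbName).collection('mycollection');

  // Insert several documents in a single call
  collection.insertMany([
    { name: 'Jane Doe', age: 28, city: 'Chicago' },
    { name: 'Sam Smith', age: 35, city: 'Boston' }
  ], (err, result) => {
    if (err) throw err;
    console.log(result.insertedCount + ' documents inserted');
    client.close();
  });
});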

2. Read (Query)
To query data from MongoDB, you can use the find method. Here’s an example:


const MongoClient = require('mongodb').MongoClient;

const url = 'mongodb://localhost:27017';
const dbName = 'mydatabase';
MongoClient.connect(url, { useNewUrlParser: true, useUnifiedTopology: true }, (err, client) => {

  if (err) throw err;
  const db = client.db(dbName);
  const collection = db.collection('mycollection');

  // Find documents
  collection.find({ city: 'New York' }).toArray((err, documents) => {

    if (err) throw err;
    console.log('Documents found:', documents);
    client.close();
  });
});

3. Update
To update data in MongoDB, you can use the updateOne or updateMany method. Here’s an example using updateOne:

const MongoClient = require('mongodb').MongoClient;
const url = 'mongodb://localhost:27017';
const dbName = 'mydatabase';
MongoClient.connect(url, { useNewUrlParser: true, useUnifiedTopology: true }, (err, client) => {

  if (err) throw err;
  const db = client.db(dbName);
  const collection = db.collection('mycollection');

  // Update one document
  collection.updateOne(
    { name: 'John Doe' },
    { $set: { age: 31 } },
    (err, result) => {
      if (err) throw err;

      console.log('Document updated');
      client.close();
    }
  );
});

4. Delete
To delete data in MongoDB, you can use the deleteOne or deleteMany method. Here’s an example using deleteOne:


const MongoClient = require('mongodb').MongoClient;
const url = 'mongodb://localhost:27017';
const dbName = 'mydatabase';
MongoClient.connect(url, { useNewUrlParser: true, useUnifiedTopology: true }, (err, client) => {

  if (err) throw err;
  const db = client.db(dbName);
  const collection = db.collection('mycollection');

  // Delete one document
  collection.deleteOne({ name: 'John Doe' }, (err, result) => {

    if (err) throw err;
    console.log('Document deleted');
    client.close();
  });
});

Make sure to replace the connection URL, database name and collection name with your specific values. Additionally, handle errors appropriately in a production environment.
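Note that the callback style shown above matches older 3.x versions of the Node.js driver. Newer driver releases favor promises and async/await; a minimal sketch of the same CRUD flow in that style, using the same placeholder URL, database, and collection names, might look like this:

const { MongoClient } = require('mongodb');

async function run() {
  const client = new MongoClient('mongodb://localhost:27017');
  try {
    await client.connect();
    const collection = client.db('mydatabase').collection('mycollection');

    await collection.insertOne({ name: 'John Doe', age: 30, city: 'New York' });  // Create
    const docs = await collection.find({ city: 'New York' }).toArray();           // Read
    await collection.updateOne({ name: 'John Doe' }, { $set: { age: 31 } });      // Update
    await collection.deleteOne({ name: 'John Doe' });                             // Delete

    console.log('Documents found:', docs.length);
  } finally {
    await client.close();
  }
}

run().catch(console.error);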

Data Audits and Testing for maintaining Data on AWS
https://exatosoftware.com/data-audits-and-testing-for-maintaining-data-on-aws/ | Wed, 20 Nov 2024


Conducting a data audit and testing for maintaining data on AWS involves several key steps to ensure data integrity, security, and compliance.
1. Define Objectives and Scope:
– Clearly define the objectives of the data audit and testing process.
– Determine the scope of the audit, including the AWS services and data sources to be assessed.
Example: Objective – Ensure compliance with GDPR regulations for personal data stored on AWS. Scope – Audit all databases and storage buckets containing customer information.

2. Inventory Data Assets:
– Identify all data assets stored on AWS, including databases, files, logs, and backups.
– Document metadata such as data types, sensitivity levels, ownership, and access controls.
Example: Identify databases (e.g., Amazon RDS instances), storage buckets (e.g., Amazon S3), and log files (e.g., CloudWatch Logs) storing customer data, including their types (e.g., names, addresses, payment details), sensitivity levels, and ownership.
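A small script can help build this inventory automatically. The sketch below uses the AWS SDK for JavaScript (v2) to list S3 buckets and RDS instances; the region is an assumed placeholder, and the output would still need to be annotated with data types, sensitivity levels, and ownership:

```javascript
// Sketch: inventory S3 buckets and RDS instances with the AWS SDK for JavaScript (v2).
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const rds = new AWS.RDS({ region: 'us-east-1' }); // assumed region

async function inventory() {
  const { Buckets } = await s3.listBuckets().promise();
  Buckets.forEach(b => console.log('S3 bucket:', b.Name));

  const { DBInstances } = await rds.describeDBInstances().promise();
  DBInstances.forEach(db => console.log('RDS instance:', db.DBInstanceIdentifier, '(' + db.Engine + ')'));
}

inventory().catch(console.error);
```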

3. Assess Data Quality:
– Evaluate the quality of data stored on AWS, including completeness, accuracy, consistency, and timeliness.
– Use data profiling and analysis tools to identify anomalies and discrepancies.
Example: Use data profiling tools to analyze customer data for completeness (e.g., missing fields), accuracy (e.g., erroneous entries), consistency (e.g., format discrepancies), and timeliness (e.g., outdated records).

4. Evaluate Security Controls:
– Review AWS security configurations, including Identity and Access Management (IAM), encryption, network security, and access controls.
– Ensure compliance with relevant standards and regulations such as GDPR, HIPAA, or SOC 2.
Example: Review IAM policies to ensure that only authorized personnel have access to sensitive data. Verify that encryption is enabled for data at rest (e.g., using AWS Key Management Service) and in transit (e.g., using SSL/TLS).
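For instance, a quick check for buckets without a default encryption configuration could look like the following sketch (AWS SDK for JavaScript v2). It covers only S3 encryption at rest and is not a full security review:

```javascript
// Sketch: flag S3 buckets that have no default encryption configuration.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function auditBucketEncryption() {
  const { Buckets } = await s3.listBuckets().promise();
  for (const bucket of Buckets) {
    try {
      await s3.getBucketEncryption({ Bucket: bucket.Name }).promise();
      console.log(bucket.Name, ': default encryption enabled');
    } catch (err) {
      // The call throws when no default encryption configuration exists
      // (it can also fail for other reasons, e.g. access denied)
      console.log(bucket.Name, ': NO default encryption configured');
    }
  }
}

auditBucketEncryption().catch(console.error);
```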

5. Review Data Governance Practices:
– Assess data governance policies and procedures, including data classification, retention, and deletion policies.
– Review data access and authorization processes to ensure appropriate permissions are enforced.
Example: Assess data classification policies to ensure that customer data is appropriately categorized based on its sensitivity level (e.g., public, internal, confidential). Review data retention policies to determine if customer data is retained only for the necessary duration.
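Retention policies can also be enforced technically, for example with S3 lifecycle rules. The sketch below (AWS SDK for JavaScript v2) expires objects after 365 days; the bucket name and retention period are placeholders, not values from this article:

```javascript
// Sketch: apply a lifecycle rule that expires objects after a retention period.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.putBucketLifecycleConfiguration({
  Bucket: 'customer-data-bucket', // placeholder bucket name
  LifecycleConfiguration: {
    Rules: [
      {
        ID: 'expire-after-one-year',
        Status: 'Enabled',
        Filter: { Prefix: '' },       // apply to every object in the bucket
        Expiration: { Days: 365 }     // placeholder retention period
      }
    ]
  }
})
  .promise()
  .then(() => console.log('Lifecycle retention rule applied'))
  .catch(err => console.error('Failed to apply lifecycle rule:', err));
```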

6. Perform Compliance Checks:
– Conduct compliance assessments against industry standards and regulations applicable to your organization.
– Implement AWS Config rules or third-party compliance tools to monitor compliance continuously.
Example: Use AWS Config rules to check if encryption is enabled for all S3 buckets containing customer data. Perform periodic audits to ensure that the organization complies with GDPR requirements regarding data processing and storage.
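Compliance results from AWS Config can also be read programmatically. The sketch below (AWS SDK for JavaScript v2) queries compliance for a Config rule; the rule name and region are placeholders for whatever encryption rule you have deployed:

```javascript
// Sketch: read compliance status for an AWS Config rule.
const AWS = require('aws-sdk');
const configService = new AWS.ConfigService({ region: 'us-east-1' }); // assumed region

configService.describeComplianceByConfigRule({
  ConfigRuleNames: ['s3-bucket-server-side-encryption-enabled'] // placeholder rule name
})
  .promise()
  .then(data => {
    data.ComplianceByConfigRules.forEach(rule => {
      console.log(rule.ConfigRuleName, '->', rule.Compliance && rule.Compliance.ComplianceType);
    });
  })
  .catch(err => console.error('Could not read compliance data:', err));
```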

7. Data Protection and Privacy Review:
– Evaluate mechanisms for data protection, such as encryption in transit and at rest, data masking, and tokenization.
– Ensure compliance with data privacy regulations, such as GDPR or CCPA, by reviewing data handling practices and consent mechanisms.
Example: Verify that sensitive customer data is pseudonymized or anonymized to protect privacy. Ensure that access controls are in place to restrict access to customer data to only authorized personnel.

8. Conduct Vulnerability Assessments:
– Perform vulnerability scans on AWS infrastructure and applications to identify security weaknesses.
– Remediate vulnerabilities promptly to mitigate potential security risks.
Example: Run vulnerability scans using AWS Inspector or third-party tools to identify security weaknesses in EC2 instances and other AWS resources. Remediate vulnerabilities such as outdated software versions or misconfigured security groups.

9. Test Disaster Recovery and Backup Procedures:
– Validate disaster recovery and backup procedures to ensure data resilience and availability.
– Perform regular backup tests and drills to verify recovery time objectives (RTOs) and recovery point objectives (RPOs).
Example: Simulate a scenario where a critical database becomes unavailable and verify the organization’s ability to restore data from backups stored in Amazon S3. Measure the time taken to recover and ensure it meets the organization’s RTO and RPO objectives.

10. Document Findings and Recommendations:
– Document audit findings, including identified issues, vulnerabilities, and areas for improvement.
– Provide recommendations for enhancing data security, compliance, and governance practices.
Example: Document findings such as unencrypted data storage and inadequate access controls. Provide recommendations such as implementing encryption and enforcing least privilege access.

11. Implement Remediation Actions:
– Prioritize and implement remediation actions based on the audit findings and recommendations.
– Monitor the effectiveness of remediation efforts to ensure issues are adequately addressed.
Example: Update IAM policies to enforce the principle of least privilege, ensuring that only necessary permissions are granted to users. Enable encryption for all relevant AWS services and enforce encryption policies.
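One such remediation action, enabling default encryption on a bucket, can be scripted as in the sketch below (AWS SDK for JavaScript v2); the bucket name is a placeholder:

```javascript
// Sketch: enable default SSE-S3 encryption on a bucket as a remediation step.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.putBucketEncryption({
  Bucket: 'customer-data-bucket', // placeholder bucket name
  ServerSideEncryptionConfiguration: {
    Rules: [
      { ApplyServerSideEncryptionByDefault: { SSEAlgorithm: 'AES256' } } // SSE-S3; use 'aws:kms' for KMS-managed keys
    ]
  }
})
  .promise()
  .then(() => console.log('Default encryption enabled'))
  .catch(err => console.error('Failed to enable encryption:', err));
```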

12. Continuous Monitoring and Review:
– Establish mechanisms for continuous monitoring of data assets on AWS.
– Regularly review and update data audit and testing procedures to adapt to evolving threats and compliance requirements.

Example: Set up AWS CloudWatch alarms to monitor security-related events, such as unauthorized access attempts or changes to security group configurations. Regularly review audit logs and adjust security controls based on emerging threats or changes in compliance requirements.
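An alarm like the one in the example can be created programmatically as well. The sketch below (AWS SDK for JavaScript v2) assumes a metric filter on CloudTrail logs already publishes an unauthorized-API-call count; the namespace, metric name, and region are placeholders:

```javascript
// Sketch: alarm on unauthorized API calls surfaced by a CloudTrail log metric filter.
const AWS = require('aws-sdk');
const cloudwatch = new AWS.CloudWatch({ region: 'us-east-1' }); // assumed region

cloudwatch.putMetricAlarm({
  AlarmName: 'unauthorized-api-calls',
  Namespace: 'CloudTrailMetrics',           // placeholder namespace from the metric filter
  MetricName: 'UnauthorizedAttemptCount',   // placeholder metric name
  Statistic: 'Sum',
  Period: 300,
  EvaluationPeriods: 1,
  Threshold: 1,
  ComparisonOperator: 'GreaterThanOrEqualToThreshold',
  TreatMissingData: 'notBreaching'
})
  .promise()
  .then(() => console.log('Alarm created'))
  .catch(err => console.error('Failed to create alarm:', err));
```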
By following these steps, organizations can effectively conduct data audits and testing to maintain data integrity, security, and compliance on AWS. Additionally, leveraging automation and AWS-native tools can streamline the audit process and enhance its effectiveness.
