Q.How do you generate a sequence at display time?
Answer:  By using CSUM.
Q.How do you Generate Sequence in Teradata?

Answer:  By using an identity column.
1. For storing purposes, use an identity column.
2. For display purposes, use CSUM.
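
Both techniques can be sketched as follows; the Employee table and its columns are assumed names for illustration:

```sql
-- Display-time sequence: CSUM(1, sort_column) numbers rows 1, 2, 3, ...
SELECT CSUM(1, emp_name) AS seq_no,
       emp_name
FROM   Employee;

-- Stored sequence: an identity column generates the number at insert time.
CREATE TABLE Employee_Seq (
    emp_id   INTEGER GENERATED ALWAYS AS IDENTITY
             (START WITH 1 INCREMENT BY 1),
    emp_name VARCHAR(30)
);
```

Note that identity values are unique but, because they are generated in per-AMP batches, not necessarily consecutive.
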
Q.How do you load multiple files to a table using FastLoad scripts?
Answer:  Remove the END LOADING statement from the script, replace the file name for each file in turn, and submit the script each time, so that the data is appended at the AMP level. For the last file, put the END LOADING statement back in the script and run it, so that the data moves from the AMPs into the table.
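
A hedged sketch of such a FastLoad script; the table, error-table and file names are assumptions, and the exact DEFINE layout depends on the input file format:

```sql
LOGON tdpid/username,password;

BEGIN LOADING Sales ERRORFILES Sales_err1, Sales_err2;

DEFINE sale_id (INTEGER), amount (DECIMAL(10,2))
FILE = file1.txt;

INSERT INTO Sales (sale_id, amount)
VALUES (:sale_id, :amount);

/* No END LOADING statement: the rows remain at the AMP level.
   Resubmit the script with file2.txt, file3.txt, ...;
   only the run for the last file includes:
   END LOADING;                                            */
LOGOFF;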

Q.What is the difference between FastLoad and MultiLoad?

Answer:  FastLoad uses multiple sessions to quickly load a large amount of data into an empty table.

MultiLoad is used for high-volume maintenance on tables and views. It also works with non-empty tables. A maximum of 5 tables can be used in one MultiLoad job.

Q.Which is faster?

Answer:  FastLoad.

Q.Difference between Inner join and outer join?

Answer:  An inner join returns only the rows where the join condition is satisfied in both tables.

An outer join always returns rows from the driving (outer) table, and returns data from the other table only where it matches the join criteria; non-matching rows are filled with NULLs.
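
For example, with assumed Customer and Orders tables joined on cust_id:

```sql
-- Inner join: only customers that have at least one order.
SELECT c.cust_id, o.order_id
FROM   Customer c
INNER JOIN Orders o
       ON c.cust_id = o.cust_id;

-- Left outer join: every customer; order columns are NULL
-- for customers with no matching order.
SELECT c.cust_id, o.order_id
FROM   Customer c
LEFT OUTER JOIN Orders o
       ON c.cust_id = o.cust_id;
```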

Q.What is multi Insert?

Answer:  Inserting data records into a table using multiple INSERT statements in a single multi-statement request. It is achieved by putting the semicolon in front of the keyword INSERT in the next statement rather than terminating the first statement with a semicolon:

INSERT INTO Sales SELECT * FROM Customer
;INSERT INTO Loan SELECT * FROM Customer;

Q.Is multi insert ANSI standard?

Answer:  No.

Q.How do you create a table with an existing structure of another table with data and with no data?

Answer:  CREATE TABLE CustomerDummy AS Customer WITH DATA; (use WITH NO DATA to copy only the structure).

Q.What is the opening step in Basic Teradata Query script?

Answer:  LOGON tdpid/username,password;

Q.You are calling a BTEQ script that drops a table and then creates it. It will throw an error if the table does not exist. How can you do it without throwing the error?

Answer:  You can do it by setting the severity of the "table does not exist" error (3807) to zero before dropping the table and resetting it to 8 afterwards.

You can do it like this

ERRORLEVEL (3807) SEVERITY 0;

DROP TABLE EMPLOYEE;

ERRORLEVEL (3807) SEVERITY 8;

Q.Can you FastExport a field, which is primary key by putting equality on that key?

Answer:  No.

Q.Did you write stored procedures in Teradata?

Answer:  No, because they become a single-AMP operation and my company didn't encourage that.

Q.What is the use of having indexes on a table?

Answer:  For faster record search.

Q.Did you use Queryman or SQL Assistant?

Answer:  SQL Assistant 6.1.

Q.I am updating a table in Bteq. It has to update a large number of rows, so it’s really slow. What do you suggest?

Answer:  In Teradata it is not recommended to update more than about 1 million rows in one transaction because of transient journal space problems. If the row count is smaller than that and the update is still slow in BTEQ, you might want to add a COLLECT STATISTICS statement before the UPDATE statement.

Q.Is it necessary to add a QUIT statement after a BTEQ query when I am calling it in a Unix environment?

Answer:  Not necessary but it is good to add a QUIT statement after a query.

Q.There is a column with a date in it. If I want to get just the month, how can it be done? Can I use SUBSTRING?

Answer:  SUBSTRING works on character fields, so it cannot be used directly on a date. To extract the month from a date column, use SELECT EXTRACT(MONTH FROM <column name>). The same works for year or day, and for hour or minute if it is a timestamp: SELECT EXTRACT(MINUTE FROM <column name>).
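
For instance, with an assumed Orders table:

```sql
-- Month from a DATE column:
SELECT EXTRACT(MONTH FROM order_date) AS order_month
FROM   Orders;

-- Minute from a TIMESTAMP column:
SELECT EXTRACT(MINUTE FROM order_ts) AS order_minute
FROM   Orders;
```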

Q.What’s the syntax of sub string?

Answer:  SUBSTRING(string_expression FROM n1 [FOR n2]); Teradata also accepts the SUBSTR(string_expression, n1 [, n2]) form.
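
For example:

```sql
SELECT SUBSTRING('Teradata' FROM 1 FOR 4);  -- 'Tera'
SELECT SUBSTRING('Teradata' FROM 5);        -- 'data'
```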

Q.Did you use the CASE WHEN statement? Can you tell us a little about it?

Answer:  Yes. It is used when a result has to be selected depending upon the value of an expression.
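
A small sketch with assumed column names; Teradata supports both the valued and the searched forms:

```sql
-- Valued CASE: compares one expression against a list of values.
SELECT emp_name,
       CASE dept_code
           WHEN 10 THEN 'Sales'
           WHEN 20 THEN 'Finance'
           ELSE 'Other'
       END AS dept_name
FROM   Employee;

-- Searched CASE: each WHEN carries its own condition.
SELECT emp_name,
       CASE
           WHEN salary >= 100000 THEN 'High'
           WHEN salary >=  50000 THEN 'Medium'
           ELSE 'Low'
       END AS salary_band
FROM   Employee;
```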

Q.While creating table my DBA has FALLBACK or NO FALLBACK in his DDL. What is that?

Answer:  FALLBACK requests that a second copy of each row inserted into a table be stored on another AMP in the same cluster. The fallback copy is used when an AMP goes down or a disk fails.

Q.My table got locked during MLOAD due to a failed job. What do I do to perform other operations on it?

Answer:  Using RELEASE MLOAD. It removes access locks from the target tables in Teradata. It must be entered from BTEQ and not from MultiLoad.

To proceed, you can do RELEASE MLOAD <table name>

Q.How to find duplicates in a table?

Answer:  Group by the fields that define a duplicate and keep the groups with a count greater than one: SELECT id, COUNT(*) FROM table GROUP BY id HAVING COUNT(*) > 1;

Q.How do you verify a complicated SQL statement?

Answer:  I use the EXPLAIN statement to check whether the query is doing what I want it to do.

Q.How many tables can you join in V2R5?

Answer:  Up to 64 tables.

Q.Did you ever use the UPPER function?

Answer:  The UPPER function converts all characters in a column to upper case.

Q.What does a LOWER Function do?

Answer:  The LOWER function converts all characters in a column to lower case.

Q.How do you see the DDL for an existing table?

Answer:  By using the SHOW TABLE command, e.g. SHOW TABLE Customer;

Q.Which is more efficient GROUP BY or DISTINCT to find duplicates?

Answer:  With more duplicates GROUP BY is more efficient, if only a few duplicates exist DISTINCT is more efficient.

Q.Syntax for CASE WHEN statement?

Answer:  CASE value_expression_1 WHEN value_expression_n THEN scalar_expression_n [ELSE scalar_expression] END;

Q.What’s the difference between TIMESTAMP (0) and TIMESTAMP (6)?

Answer:  TIMESTAMP(0) is CHAR(19) and TIMESTAMP(6) is CHAR(26).

Everything is the same except that TIMESTAMP(6) also stores microseconds.

Q.How do you determine the number of sessions?

Answer:  The number of sessions depends on:

Teradata performance and workload

Client platform type, performance and workload

Channel performance for channel-attached systems

Network topology and performance for network-attached systems

Volume of data to be processed by the application

Q.What is a node? How many nodes and AMPs were used in your previous project?

Answer:  A node is the basic building block of a Teradata system: a server running the Teradata database software. We used 318 nodes and each node had 2 to 4 AMPs.

Q.What is a clique?

Answer:  Clique is a group of disk arrays physically cabled to a group of nodes.

Q.Interviewer explained about their project (Environment, nature of work)

Answer:  Listen to them carefully so that at the end of the interview you can ask questions about the project when you are given a chance to ask questions.

Q.Tell us something about yourself?

Answer:  Describe your project experience and technical skill sets; mention that you are hard working, a good team player, a self-learner and self-motivated.

Q.What is the best project you ever worked with and why it is best project?

Answer:  All the projects I have worked on so far are best projects. I treat every project as equal and work hard for the success of the project.

Q.What makes a project successful and how you have contributed to the success of the project?

Answer:  Good team members, technical knowledge of team members, hard work, sharing knowledge among the team, and each individual's contribution to the project. Explain to them that you possess all the skills mentioned above.

Q.Have you worked under stress and how did you handle it?

Answer:  Yes. Many times, to deliver the project on schedule, we were under a lot of pressure. During those times we worked extra hours and helped each other in the team to deliver the project on schedule. Team effort is the key factor for the success of the project.

Q.Have you used procedures?

Answer:  No, I have not used procedures, but I have expert knowledge of writing them. My company has not encouraged writing procedures because they become a single-AMP operation, and as such use a lot of resources and are expensive in terms of resources and time.

Q.What is the purpose of indexes?

Answer:  An index is a mechanism that can be used by the SQL query optimizer to make table access more performant. Indexes enhance data access by providing a more-or-less direct path to stored data and avoiding the necessity to perform full table scans to locate the small number of rows you typically want to retrieve or update.

Q.What is primary index and secondary index?

Answer:  The primary index is the mechanism for assigning a data row to an AMP and a location on the AMP's disks. Indexes are also used to access rows from a table without having to search the entire table.

Secondary indexes enhance set selection by specifying access paths less frequently used than the primary index path. Secondary indexes are also used to facilitate aggregate operations. If a secondary index covers a query, the Optimizer determines that it would be less costly to access its rows directly rather than using it to access the base table rows it points to. Sometimes multiple secondary indexes with low individual selectivity can be overlapped and bit-mapped to provide enhanced retrieval performance.
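
In DDL terms (table and column names are assumptions):

```sql
-- The primary index is chosen at CREATE TABLE time and
-- determines which AMP each row hashes to.
CREATE TABLE Employee (
    emp_id    INTEGER,
    dept_code INTEGER,
    emp_name  VARCHAR(30)
) UNIQUE PRIMARY INDEX (emp_id);

-- A secondary index builds a separate index sub-table
-- that points back to the base rows.
CREATE INDEX (dept_code) ON Employee;
```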

Q.What are the things to be considered while creating a secondary index?

Answer:  Creating a secondary index causes Teradata to build a sub-table to contain its index rows, thus adding another set of rows that requires updating each time a table row is inserted, deleted, or updated. Secondary index sub-tables are also duplicated whenever a table is defined with FALLBACK, so the maintenance overhead is effectively doubled.

Q.What is collect statistics?

Answer:  Collects demographic data for one or more columns of a table, hash index, or join index, computes a statistical profile of the collected data, and stores the synopsis in the data dictionary. The Optimizer uses the synopsis data when it generates its table access and join plans.

Q.Can we collect statistics on multiple columns?

Answer:  Yes we can collect statistics on multiple columns.

Q.Can we collect statistics on table level?

Answer:  Yes, we can collect statistics at the table level. The syntax is COLLECT STATISTICS ON TAB_A;
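
Some hedged examples (Employee and its columns are assumed names):

```sql
-- Single column:
COLLECT STATISTICS ON Employee COLUMN (dept_code);

-- Multiple columns as one multi-column statistic:
COLLECT STATISTICS ON Employee COLUMN (dept_code, job_code);

-- Table level: re-collects all statistics previously defined on the table.
COLLECT STATISTICS ON Employee;
```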

Q.When Tpump is used instead of MultiLoad?

Answer:  TPump provides an alternative to MultiLoad for the low-volume batch maintenance of large databases under control of a Teradata system. Instead of updating Teradata databases overnight, or in batches throughout the day, TPump updates information in real time, acquiring data from the client system with low processor utilization. It does this through a continuous feed of data into the data warehouse, rather than the traditional batch updates. Continuous updates result in more accurate, timely data. And, unlike most load utilities, TPump uses row-hash locks rather than table-level locks. This allows you to run queries while TPump is running, and it also means that TPump can be stopped instantaneously. As a result, businesses can make better decisions that are based on the most current data.

Q.What is spool space and when running a job if it reaches the maximum spool space how you solve the problem?

Answer:  Spool space is used to hold intermediate rows during processing, and to hold the rows in the answer set of a transaction. Spool space reaches maximum when the query is not properly optimized. Use appropriate conditions in WHERE clause of the query to limit the answer set.

Q.What is your level of expertise in using MS office suite?

Answer:  Expert level. I have been using it for the last 8 years for documentation.

Q.Have you used Net meeting?

Answer:  Yes. I used NetMeeting for team meetings when members of the team were in geographically different locations.

Q.Do you have any question?

Answer:  What is the team size going to be? What is the current status of the project? What is the project schedule?

Q.What is your available date?

Answer:  Immediate. Or your available date for the project.

Q.How much experience do you have in MVS?

Answer:  Intermediate. In my previous two projects I used MVS to submit JCL jobs.

Q.Have you created JCL script from scratch?

Answer:  Yes. I have created JCL scripts from scratch while creating jobs in the development environment.

Q.Have you modified any JCL script and used?

Answer:  Yes I have modified JCL scripts. In my previous projects many applications were re-engineered so the existing JCL scripts were modified according to the company coding standards.

Q.Rate yourself on using Teradata tools like BTEQ, Queryman, FastLoad, MultiLoad and TPump.

Answer:  Intermediate to expert level. I have been using them extensively for the last 4 years. I am also certified in Teradata.

Q.Which is your favorite area in the project?

Answer:  I enjoy working on every part of the project. I volunteer my time for my peers so that I can learn and contribute more towards the project's success.

Q.What is data mart?

Answer:  A data mart is a special-purpose subset of enterprise data used by a particular department, function or application. Data marts may have both summary and detail data; however, usually the data has been pre-aggregated or transformed in some way to better handle the particular type of requests of a specific user community. Data marts are categorized as independent, logical and dependent data marts.

Q.Difference between star and snowflake schemas?

Answer:  Star schema is De-normalized and snowflake schema is normalized.

 

Q.Why are OLTP database designs not generally a good idea for a Data Warehouse?

Answer:  OLTP designs are for real-time transactional data; they are highly normalized and not pre-aggregated, which makes them poorly suited to the large analytical queries of decision support systems.

Q.What type of Indexing mechanism do we need to use for a typical data warehouse?

Answer:  Primary Index mechanism is the ideal type of index for data warehouse.

Q.What is VLDB?

Answer:  Very Large Databases: databases that have grown so large (typically multiple terabytes) that they require special techniques for loading, maintenance, and backup.

Q.What is real time data warehousing?

Answer:  Real-time data warehousing is a combination of two things: 1) real-time activity and 2) data warehousing. Real-time activity is activity that is happening right now. The activity could be anything such as the sale of widgets. Once the activity is complete, there is data about it. Data warehousing captures business activity data. Real-time data warehousing captures business activity data as it occurs. As soon as the business activity is complete and there is data about it, the completed activity data flows into the data warehouse and becomes available instantly. In other words, real-time data warehousing is a framework for deriving information from data as the data becomes available.

Q.What is ODS?

Answer:  An operational data store (ODS) is primarily a “dump” of relevant information from a very small number of systems (often just one) usually with little or no transformation. The benefits are an ad hoc query database, which does not affect the operation of systems required to run the business. ODS’s usually deal with data “raw” and “current” and can answer a limited set of queries as a result.

Q.What is real time and near real time data warehousing?

Answer:  The difference between real time and near real time can be summed up in one word: latency. Latency is the time lag that is between an activity completion and the completed activity data being available in the data warehouse. In real time, the latency is negligible whereas in near real time the latency is a tangible time frame such as two hours.

Q.What are Normalization, First Normal Form, Second Normal Form and Third Normal Form?

Answer:  Normalization is the process of efficiently organizing data in a database. The two goals of the normalization process are to eliminate redundant data (storing the same data in more than one table) and to ensure data dependencies make sense (only storing related data in a table).

First normal form:

Eliminate duplicate columns from the same table.

Create separate tables for each group of related data and identify each row with a unique column or set of columns (the primary key).

Second normal form:

Remove subsets of data that apply to multiple rows of a table and place them in a separate table.

Create relationships between these new tables and their predecessors through the use of foreign keys.

Third normal form:

Remove columns that are not dependent upon the primary key.
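
As a small illustration of third normal form with assumed names: dept_name depends on dept_id rather than on the key emp_id, so it is moved to its own table:

```sql
-- Before 3NF (dept_name is transitively dependent on emp_id):
--   Employee(emp_id, emp_name, dept_id, dept_name)

-- After 3NF:
CREATE TABLE Employee (
    emp_id   INTEGER NOT NULL PRIMARY KEY,
    emp_name VARCHAR(30),
    dept_id  INTEGER
);

CREATE TABLE Department (
    dept_id   INTEGER NOT NULL PRIMARY KEY,
    dept_name VARCHAR(30)
);
```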

Q.What is fact table?

Answer:  The centralized table in a star schema is called the FACT table, i.e. a table that contains facts and is connected to dimensions. A fact table typically has two types of columns: those that contain facts and those that are foreign keys to dimension tables. The primary key of a fact table is usually a composite key made up of all of its foreign keys. A fact table might contain either detail-level facts or facts that have been aggregated (fact tables that contain aggregated facts are often called summary tables instead). In the real world, it is possible to have a fact table that contains no measures or facts. These tables are called Factless Fact tables.

Q.What is ETL?

Answer:  Extract, transformation, and loading. ETL refers to the methods involved in accessing and manipulating source data and loading it into target database. The first step in ETL process is mapping the data between source systems and target database (data warehouse or data mart). The second step is cleansing of source data in staging area. The third step is transforming cleansed source data and then loading into the target system. Note that ETT (extract, transformation, transportation) and ETM (extraction, transformation, move) are sometimes used instead of ETL.

Q.What is ER diagram?

Answer:  It is an Entity Relationship diagram. It describes the relationships among the entities in the database model.

Q.What is data mining?

Answer:  Analyzing large volumes of relatively simple data to extract important trends and new, higher-level information. For example, a data-mining program might analyze millions of product orders to determine trends among top-spending customers, such as their likelihood to purchase again, or their likelihood to switch to a different vendor.

Q.What is Star schema?

Answer:  Star Schema is a relational database schema for representing multi-dimensional data. It is the simplest form of data warehouse schema that contains one or more dimensions and fact tables. It is called a star schema because the entity-relationship diagram between dimensions and fact tables resembles a star where one fact table is connected to multiple dimensions. The center of the star schema consists of a large fact table and it points towards the dimension tables. The advantages of star schema are slicing down, performance increase and easy understanding of data.

Q.What is a level of Granularity of a fact table?

Answer:  The components that make up the granularity of the fact table correspond directly with the dimensions of the data model. Thus, when you define the granularity of the fact table, you identify the dimensions of the data model. The granularity of the fact table also determines how much storage space the database requires. For example, consider the following possible granularities for a fact table:

Product by day by region

Product by month by region

The size of a database that has a granularity of product by day by region would be much greater than a database with a granularity of product by month by region because the database contains records for every transaction made each day as opposed to a monthly summation of the transactions. You must carefully determine the granularity of your fact table because too fine a granularity could result in an astronomically large database. Conversely, too coarse granularity could mean the data is not detailed enough for users to perform meaningful queries against the database.

Q.What is a dimension table?

Answer:  Dimension table is one that describes the business entities of an enterprise, represented as hierarchical, categorical information such as time, departments, locations, and products. Dimension tables are sometimes called lookup or reference tables. In a relational data modeling, for normalization purposes, country lookup, state lookup, county lookup, and city lookups are not merged as a single table. In a dimensional data modeling (star schema), these tables would be merged as a single table called LOCATION DIMENSION for performance and slicing data requirements. This location dimension helps to compare the sales in one region with another region. We may see good sales profit in one region and loss in another region. If it is a loss, the reasons for that may be a new competitor in that area, or failure of our marketing strategy etc.

Q.What are the various Reporting tools in the Market?

Answer:  Crystal Reports, Business Objects, MicroStrategy, etc.

Q.What are the various ETL tools in the Market?

Answer:  Ab Initio, Informatica, etc.

Q.What is a three-tier data warehouse?

Answer:  The three-tier architecture differs from the two-tier architecture by strictly enforcing a logical separation of the graphical user interface, business logic, and data. The three-tier architecture is widely used for data warehousing today. For organizations that require greater performance and scalability, the three-tier architecture may be more appropriate. In this architecture, data extracted from legacy systems is cleansed, transformed, and stored in high-speed database servers, which are used as the target database for front-end data access.

Q.Importance of Surrogate Key in Data warehousing?

Answer:  A surrogate key is a primary key for a dimension table. Its main importance is that it is independent of the underlying database, i.e. a surrogate key is not affected by changes going on in the source database.

Q.Differentiate Primary Key and Partition Key?

Answer:  Primary Key is a combination of unique and not null. It can be a collection of key values, called a composite primary key. Partition Key is just a part of the Primary Key. There are several methods of partitioning, such as Hash, DB2, and Random. When using hash partitioning, we specify the Partition Key.

Q.Differentiate Database data and Data warehouse data?

Answer:  Data in a database is detailed or transactional, both readable and writable, and current.

Data in a data warehouse is detailed or summarized, and the warehouse is a storage place for historical data.

Q.What are OLAP, MOLAP, ROLAP, DOLAP and HOLAP? Examples?

Answer:

OLAP:

OLAP stands for Online Analytical Processing. It uses database tables (fact and dimension tables) to enable multidimensional viewing, analysis and querying of large amounts of data. E.g. OLAP technology could provide management with fast answers to complex queries on their operational data or enable them to analyze their company’s historical data for trends and patterns.

MOLAP:

Stands for Multidimensional OLAP. In MOLAP cubes the data aggregations and a copy of the fact data are stored in a multidimensional structure on the Analysis Server computer. It is best when extra storage space is available on the Analysis Server computer and the best query performance is desired. MOLAP local cubes contain all the necessary data for calculating aggregates and can be used offline. MOLAP cubes provide the fastest query response time and performance but require additional storage space for the extra copy of data from the fact table.

ROLAP:

Stands for Relational OLAP. In ROLAP cubes a copy of data from the fact table is not made and the data aggregates are stored in tables in the source relational database. A ROLAP cube is best when there is limited space on the Analysis Server and query performance is not very important. ROLAP local cubes contain the dimensions and cube definitions but aggregates are calculated when they are needed. A ROLAP cube requires less storage space than MOLAP and HOLAP cubes.

HOLAP:

Stands for Hybrid OLAP. A HOLAP cube has a combination of the ROLAP and MOLAP cube characteristics. It does not create a copy of the source data; however, data aggregations are stored in a multidimensional structure on the Analysis Server computer. HOLAP cubes are best when storage space is limited but faster query responses are needed.

DOLAP:

Stands for Desktop OLAP. In DOLAP, a small cube or data extract is downloaded to the user's desktop machine and analyzed locally, which allows offline analysis but limits the data volume that can be handled.

Q.What is OLTP?

Answer:  OLTP stands for Online Transaction Processing. OLTP uses normalized tables to quickly record large amounts of transactions while making sure that these updates of data occur in as few places as possible. Consequently, OLTP databases are designed for recording the daily operations and transactions of a business. E.g. a timecard system that supports a large production environment must successfully record a large number of updates during critical periods like lunch hour, breaks, startup and close of work.

Q.What is staging area?

Answer:  The data staging area is a system that stands between the legacy systems and the analytics system, usually a data warehouse and sometimes an ODS. The data staging area is considered the “back room” portion of the data warehouse environment. The data staging area is where the extract, transform and load (ETL) takes place and is out of bounds for end users. Some of the functions of the data staging area include:

Extracting data from multiple legacy systems

Cleansing the data, usually with a specialized tool

Integrating data from multiple legacy systems into a single data warehouse

Transforming legacy system keys into data warehouse keys, usually surrogate keys

Transforming disparate codes for gender, marital status, etc., into the data warehouse standard

Transforming the heterogeneous legacy data structures to the data warehouse data structures

Loading the various data warehouse tables via automated jobs in a particular sequence through the bulk loader provided with the data warehouse database or a third-party bulk loader

Q.What is subject area?

Answer:  Subject area means fundamental entities that make up the major components of the business, e.g. customer, product, employee.


Q.What is tenacity?

Answer:  The number of hours a Teradata utility will try to establish a connection to the system. The default is 4 hours.

Q.What is a checkpoint?

Answer:  Checkpoints are entries posted to a restart log table at regular intervals during the data transfer operation. If processing stops while a job is running, you can restart the job at the most recent checkpoint.

Q.What is slowly changing dimension?

Answer:  In a slowly changing dimension, the attributes of a record vary over time. There are three ways to handle this:

Type 1 – Replace an old record with a new record. No historical data available.

Type 2 – Keep the old record and insert a new record. Historical data is available but this is resource intensive.

Type 3 – In the existing record, maintain extra columns for the new values.
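
A Type 2 change can be sketched as closing the current row and inserting a new one (Customer_Dim and its columns are assumed names):

```sql
-- Expire the current version of customer 101:
UPDATE Customer_Dim
SET    end_date = CURRENT_DATE
WHERE  cust_id  = 101
AND    end_date IS NULL;

-- Insert the new version; end_date stays NULL until it too changes:
INSERT INTO Customer_Dim (cust_id, address, start_date, end_date)
VALUES (101, '12 New Street', CURRENT_DATE, NULL);
```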

Q.What is sleep?

Answer:  The number of minutes a Teradata utility will wait between logon attempts. The default is 6 minutes.

Q.Difference between MultiLoad and TPump?

Answer:  TPump provides an alternative to MultiLoad for low-volume batch maintenance of large databases under control of a Teradata system. TPump updates information in real time, acquiring data from the client system with low processor utilization. It does this through a continuous feed of data into the data warehouse, rather than the traditional batch updates. Continuous updates result in more accurate, timely data. TPump uses row-hash locks rather than table-level locks, which allows you to run queries while TPump is running.

Q.Different phases of MultiLoad?

Answer:  Preliminary phase

DML phase

Acquisition phase

Application phase

End phase

Q.What does the EXPLAIN modifier do?

Answer:  The EXPLAIN modifier generates an English translation of the parser's plan. The request is fully parsed and optimized but not executed. EXPLAIN returns:

Text showing how a statement will be processed

An estimate of how many rows will be involved

A relative cost of the request in units of time

This information is useful for predicting row counts, predicting performance, testing queries before production, and analyzing various approaches to a problem.

Q.Difference between Oracle and a Teradata warehouse?

Answer:  Teradata can handle multiple terabytes of data. Teradata is linearly expandable, uses a mature optimizer, and has a shared-nothing architecture. It uses data parallelism.

Teradata DBAs never have to reorganize data or index space, pre-allocate table/index space, format partitions, tune buffer space, ensure that queries run in parallel, pre-process data for loading, or write and run programs to split the input data into partitions for loading.

Q.What is dimensional modeling?

Answer:  Dimensional data modeling comprises one or more dimension tables and fact tables. Good examples of dimensions are location, product, time, promotion, organization, etc. Dimension tables store records related to that particular dimension, and no facts (measures) are stored in these tables.

Q.How will you solve the problem that occurs during update?

Answer:  When there is an error during the update process, an entry is posted in the error log table. Query the log table and fix the error and restart the job.

Q.Can you connect MultiLoad from Ab Initio?

Answer:  Yes we can connect.

Q.What interface is used to connect to windows based applications?

Answer:  WinCLI interface.

Q.What is data warehousing?

Answer:  A data warehouse is a subject oriented, integrated, time variant, non-volatile collection of data in support of management’s decision-making process.

Q.What is data modeling?

Answer:  A Data model is a conceptual representation of data structures (tables) required for a database and is very powerful in expressing and communicating the business requirements.

Q.What is logical data model?

Answer:  A Logical data model is the version of a data model that represents the business requirements (entire or part) of an organization and is developed before the physical data model. A sound logical design should streamline the physical design process by clearly defining data structures and the relationships between them. A good data model is created by clearly thinking about the current and future business requirements. Logical data model includes all required entities, attributes, key groups, and relationships that represent business information and define business rules.

Q.Tell us something about data modeling tools?

Answer:  Data modeling tools are used to transform business requirements into a logical data model, and the logical data model into a physical data model. From the physical data model, these tools can generate the SQL code for creating database entities.

Q.Steps to create a data model?

Answer:  Get business requirements.

Create High Level Conceptual Data Model.

Create Logical Data Model.

Select target DBMS where data-modeling tool creates the physical schema.

Create a standard abbreviation document according to business standards.

Q.What is the maximum number of DML can be coded in a MultiLoad script?

Answer:  A maximum of 5 DML statements can be coded in a MultiLoad script.
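
A hedged skeleton of a MultiLoad script with one DML label; all names and the field layout are assumptions:

```sql
.LOGTABLE Sales_log;
.LOGON    tdpid/username,password;

.BEGIN MLOAD TABLES Sales;

.LAYOUT sales_layout;
.FIELD  sale_id *  INTEGER;
.FIELD  amount  *  DECIMAL(10,2);

.DML LABEL ins_sale;
INSERT INTO Sales (sale_id, amount)
VALUES (:sale_id, :amount);

.IMPORT INFILE sales.dat
        LAYOUT sales_layout
        APPLY  ins_sale;

.END MLOAD;
.LOGOFF;
```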

Q.There is a load to the table every hour, 24/7. Morning traffic is high, afternoon traffic is lower, and night traffic is high. In this situation, which utility do you use and how do you load?
Answer:  TPump is suggested here.
By increasing or decreasing the packet size we can handle the varying traffic.
Q.A FastLoad script failed and the error tables are available. How do you restart?
Answer:  There are two ways.
1. In case the old file is to be re-run: don't drop the error tables; simply rectify the error in the script or file and run again, so that the load resumes from the last checkpoint.
2. In case a new file is to be run: drop the error tables, then run a script with only the BEGIN LOADING and END LOADING statements so that it releases the lock on the target table (if possible, also remove the entry from the fastlog table). Then run the script freshly with the new file.
Q.What are the advantages of other ETL tools (Informatica, DataStage, SSIS, etc.) over the Teradata utilities, or vice versa?
Answer:  The Teradata utilities run faster than other ETL tools for file-to-table (import) or table-to-file (export) operations. INMOD and OUTMOD routines also help us do better programming while importing and exporting.
Q.What are the major advantages of other ETL tools over the Teradata utilities?

Answer:  1. They can operate on heterogeneous sources and destinations (Oracle, SQL Server, Excel, flat files, etc.).
2. As they provide full GUI support, debugging is much easier.
3. Reusable components (Informatica mapplets, worklets, etc.) are available in ETL tools, so if we change a component in its main location, the other applications (mappings) that use it are updated instantly.
4. Pivoting (normalization) and unpivoting (denormalization) can be implemented very easily in ETL tools.
5. Caching is one more advantage when we work with a (heterogeneous) source that does not change frequently. Sometimes the cache can even be shared across applications.
Q.FastLoad aborted in Phase 2 after data acquisition completed. How do you restart?
Answer:
  Submit a script containing only the BEGIN LOADING and END LOADING statements and run again, so that the application phase (AMP to table) is re-run.
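The restart described above can be sketched as a FastLoad script (table and error-table names are hypothetical) that contains only the BEGIN LOADING and END LOADING statements, so only the application phase runs:

```sql
/* Hypothetical restart script: no DEFINE/INSERT, so data
   acquisition is skipped and only the application phase
   (AMP to table) is executed. */
LOGON tdpid/username,password;
BEGIN LOADING Sales_Stg
   ERRORFILES Sales_Err1, Sales_Err2;
END LOADING;
LOGOFF;
```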
Q.How do you Generate sequence at the time of Display?
Answer:  By Using CSUM 
Q.How do you Generate Sequence in Teradata?

Answer:  By using an identity column.
1. For storage purposes, use an identity column.
2. For display purposes, use CSUM.
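As a sketch (table and column names are hypothetical), the two approaches look like this:

```sql
/* Display-time sequence using CSUM, ordered by emp_name */
SELECT CSUM(1, emp_name) AS seq_no, emp_name
FROM   Employee;

/* Stored sequence using an identity column */
CREATE TABLE Employee_Seq
( emp_id   INTEGER GENERATED ALWAYS AS IDENTITY
           (START WITH 1 INCREMENT BY 1),
  emp_name VARCHAR(30)
);
```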
Q.How do you load multiple files to a table using FastLoad scripts?

Answer:  Remove the END LOADING statement from the script, replace the file name one by one in the script up to the last file, and submit each time, so that the data is appended at the AMP level. For the last file, specify the END LOADING statement in the script and run, so that the application phase (AMP to table) is executed.
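Sketched as FastLoad scripts (file, table and column names are hypothetical): every file except the last is submitted without END LOADING, so the rows remain at the AMP level and each run appends to them:

```sql
/* Run once per intermediate file: no END LOADING. */
LOGON tdpid/username,password;
BEGIN LOADING Sales_Stg ERRORFILES Sales_Err1, Sales_Err2;
DEFINE cust_id (VARCHAR(10)),
       amount  (VARCHAR(12))
FILE = file1.txt;
INSERT INTO Sales_Stg VALUES (:cust_id, :amount);
LOGOFF;

/* For the last file only, add END LOADING after the INSERT,
   so the AMP-to-table application phase runs. */
```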
Q.Why don't MultiLoad and FastLoad support SIs, JIs, RI and triggers?

Answer:  All of the above features require communication between multiple AMPs. FastLoad and MultiLoad do not provide any communication between AMPs; each AMP operates independently. Since these features would violate that design and would also slow the load down, they are not allowed.

Q.Why does MultiLoad support NUSIs but not USIs?

Answer:  For a NUSI, the index subtable row is on the same AMP as the data row, so each AMP can operate independently and in parallel. A USI's subtable row generally lives on a different AMP, which would require inter-AMP communication.
Q.While executing an MLOAD, the client system restarted?

Answer:  We need to manually resubmit the script, so that it loads data from the last checkpoint.
Q.While executing an MLOAD, the Teradata server restarted?
Answer:  Along with the server, the MLOAD job is also restarted and runs from the last checkpoint.
Q.There is a file containing 100 records; you need to load 60 records, skipping the first 20 records and the last 20 records.
Answer:  Use the BTEQ utility for this task, with SKIP=20 and .REPEAT 60 in the script.
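A sketch of the BTEQ import (file, table and column names are hypothetical): SKIP=20 drops the first 20 records, and .REPEAT 60 processes exactly 60, so the last 20 are never read:

```sql
.LOGON tdpid/username,password
.IMPORT DATA FILE = input.dat, SKIP = 20
.REPEAT 60
USING (cust_id VARCHAR(10), amount VARCHAR(12))
INSERT INTO Sales_Stg VALUES (:cust_id, :amount);
.QUIT
```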

Q.How to see current teradata version?
Answer:  .SHOW VERSION
Q.What is a Node?

Answer:  A node is a collection of hardware and software components. Typically, a server is called a node.
Q.What is PDE?
 
Answer:  Parallel Database Extensions.
A software interface layer on top of the operating system that enables the database to operate in a parallel environment. It was created by NCR to support the parallel environment.
Q.What is a Trusted Parallel Database (TPD)?

Answer:  A database is called a TPD if it runs under PDE.
Teradata runs under PDE, so we call Teradata a pure parallel database, or trusted parallel database.
Q.What is the channel driver?
 
Answer:  Software that communicates between the PEs and applications running on channel-attached clients.
Q.What is the Teradata Gateway?

Answer:  The Teradata Gateway software provides communication between applications and the PEs assigned to network-attached clients. There is one Gateway per node.
Q.What is a Virtual disk?

Answer:  A collection of cylinders (physical disks) arranged in an array fashion is called a vdisk or virtual disk. Traditionally, this is called a disk array.
Q.What is an AMP?
Answer:  Access Module Processor. It is a virtual processor responsible for managing one portion of the database (a collection of virtual disks). This portion is not shared with any other AMP, so we call this architecture a shared-nothing architecture.
The AMP contains the Database Manager subsystem and performs the operations below:
a. Performing DDL
b. Performing DML
c. Implementing joins and aggregations
d. Applying and releasing locks, etc.
Q.What is the Parsing Engine?
Answer:  The PE is a type of vproc; it takes an SQL request and delivers the SQL response.
It has software components to break the SQL into steps and send the steps to the AMPs.
Session Control
A session is a logical connection between a user and the application. Session Control handles authorization: if the logon is valid it logs the user on, otherwise it rejects the logon.
Parser
Checks for syntax errors.
Checks for semantic errors.
Checks the existence of objects.
Dispatcher
It takes the set of requests and keeps them in a queue, and delivers the set of responses through the same queue; that is, it handles request and response flow control.
Q.How many sessions can a PE handle at a time?
Answer:  A PE handles a maximum of 120 sessions.

Q.An open batch session failed because of the following error:

WRITER_1_*_1> WRT_8229 Database errors occurred:

FnName: Execute — [NCR][ODBC Teradata][Teradata Database] Duplicate unique prime key error in CFDW2_DEV_CNTL.CFDW_ECTL_CURRENT_BATCH.

FnName: Execute — [DataDirect][ODBC lib] Function sequence error

Answer:  Whenever you want to open a fresh batch id, you should first close the existing batch id and then open a fresh one.

Q.The source is a flat file and I am staging this flat file in Teradata.

I found that leading zeros are being truncated in Teradata. What could be the reason?

Answer:  The reason is that the column datatype in Teradata is defined as INTEGER; that is why the leading zeros are truncated. Change the target table datatype to VARCHAR; a VARCHAR column will not truncate the leading zeros.

Q.Can't determine the current batch ID for data source 47.

Answer:  For any fresh stage load you should open a batch id for the current data source id.

Q.Unique primary key violation on the CFDW_ECTL_CURRENT_BATCH table.

Answer:  In the CFDW_ECTL_CURRENT_BATCH table, a unique primary key is defined on the ECTL_DATA_SRCE_ID and ECTL_DATA_SRCE_INST_ID columns. At any point in time you should have only one record per combination of these columns.

Q.Can't insert a NULL value into a NOT NULL column.

Answer:  First find all the NOT NULL columns in the target table and cross-verify them with the corresponding source columns; identify which source column is supplying the NULL value and take the necessary action.

Q.I am passing one record to the target lookup but the lookup is not returning the matching record. I know that the record is present in the lookup. What action will you take?

Answer:  Use LTRIM and RTRIM in the lookup SQL override. This will remove the unwanted blank spaces, and the lookup will then find the matching record.

Q.I am getting duplicate records for the natural key (ECTL_DATA_SRCE_KEY). What will you do to eliminate the duplicates?

Answer:  We will concatenate 2, 3 or more source columns and check for duplicate records. If you no longer get duplicates after concatenating, use those columns to populate the ECTL_DATA_SRCE_KEY column in the target.

Q.Accti_id is a NOT NULL column in the AGREEMENT table. You are getting a NULL value from the CFDW_AGREEMENT_XREF lookup. What will you do to eliminate the NULL records?

Answer:  After the stage load, I will populate the CFDW_AGREEMENT_XREF table (this table basically contains the surrogate keys). Once the XREF table is populated, you won't get any NULL records in the Accti_id column.

Q.Unique primary key violation on the CFDW_ECTL_BATCH_HIST table.

Answer:  In the CFDW_ECTL_BATCH_HIST table, a unique primary index is defined on the ectl_btch_id column, so there should be only one unique record per ectl_btch_id value.

Q.When will you use the ECTL_PGM_ID column in the target lookup SQL override?

Answer:  When you are populating a single target table (the AGREEMENT table) from multiple mappings in the same Informatica folder, we use ECTL_PGM_ID in the target lookup SQL override. This eliminates unnecessary record updates.

Q.You have defined the primary keys as per the ETL spec but you are still getting duplicate records. How will you handle this?

Answer:  Apart from the primary key columns in the spec, I will first add another column as a primary key candidate and check for duplicate records. If I no longer get duplicates, I will ask the modeller to add this column to the primary key.

Q.In Teradata the error is reported as: "no more room in database".

Answer:  Speak with the DBA to add space to that database.

Q.Though the column is available in the target table, when I try to load with MLOAD it reports that the column is not available in the table. Why?

Answer:  The loading process was happening through a view, and the view had not been refreshed to include the new column, hence the error message. Refresh the view definition to add the new column.

Q.When deleting from the target table, although I wanted to delete only some data, by mistake all the data got deleted from the development table.

Answer:  Add ECTL_DATA_SRCE_ID and PGM_ID to the WHERE clause of the query.

Q.While updating the target table, it shows an error message saying multiple rows are trying to update a single row.

Answer:  There are duplicates in the table matching the WHERE condition of the update query. These duplicate records need to be eliminated.

Q.I have a file with a header, data records and a trailer. The data records are comma-delimited, while the header and trailer are fixed width and start with HDR and TRA respectively.
I need to skip the header and trailer while loading the file with MultiLoad. Please help in this case.

Answer:  Code the MLOAD utility to consider only the data records, excluding the header and trailer records (for example, with an APPLY ... WHERE condition that filters out records starting with 'HDR' or 'TRA').

Q.What is the BYNET?
Answer:  It acts as the message communication layer between components. It is responsible for:
1. Sending messages
2. Merging data
3. Sorting data
Q.What is a Clique?
Answer:  It protects against node failure.
1. A clique is a collection of nodes that share common disk drives.
2. Whenever a node goes down, its vprocs automatically migrate from the failed node to another node, so data can still be retrieved from the common disk drives.
Q.List the different types of LOCKS in Teradata?
Answer:  Teradata can apply four types of locks:
a. Access lock
b. Read lock
c. Write lock
d. Exclusive lock
Q.At what levels can Teradata apply a LOCK?

Answer:  1. Database-level lock — all objects inside the database are locked.
2. Table-level lock — all rows inside the table are locked.
3. Row-hash-level lock — only the corresponding row(s) are locked.
Q.How many AMPs are involved in the case of a Primary Index access?
 
Answer:  It is always one AMP.

Q.What about the UPSERT command in Teradata?
Answer:  UPSERT means update, else insert.
Teradata supports this in a single statement.
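A sketch of the atomic UPSERT form (UPDATE ... ELSE INSERT), with hypothetical table and column names:

```sql
/* If a row with emp_id 101 exists, update it; else insert it. */
UPDATE Employee
SET    salary = 55000
WHERE  emp_id = 101
ELSE INSERT INTO Employee (emp_id, emp_name, salary)
     VALUES (101, 'Smith', 55000);
```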
Q.What is the advantage of a PPI?
Answer:  We mainly use it for range-based or category-based data storage.
Range queries do not have to do a full table scan; the query goes directly to the corresponding partition and skips the other partitions.
FastLoad and MultiLoad work with PPI tables, but not with all secondary indexes.
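A range-partitioned table can be sketched as follows (names and ranges are hypothetical); a query restricted on sale_date then reads only the matching partitions:

```sql
CREATE TABLE Sales
( sale_id   INTEGER,
  sale_date DATE,
  amount    DECIMAL(12,2)
)
PRIMARY INDEX (sale_id)
PARTITION BY RANGE_N (
  sale_date BETWEEN DATE '2020-01-01' AND DATE '2020-12-31'
            EACH INTERVAL '1' MONTH,
  NO RANGE, UNKNOWN );

/* Scans only the March partition, not the whole table. */
SELECT SUM(amount)
FROM   Sales
WHERE  sale_date BETWEEN DATE '2020-03-01' AND DATE '2020-03-31';
```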
Q.What is the size of BYTEINT, SMALLINT, INTEGER?

BYTEINT – 1 byte – 8 bits —> -128 to 127
SMALLINT – 2 bytes – 16 bits —> -32,768 to 32,767
INTEGER – 4 bytes – 32 bits —> -2,147,483,648 to 2,147,483,647
Q.Difference between a user and a database in Teradata?

Database: a passive repository; it can hold objects such as tables, macros and views; it does not have a password.
User: active; it can also hold objects such as tables, macros and views; it has a password and can log on and run queries.
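The difference shows up directly in the DDL (names and sizes are hypothetical); only the user gets a password:

```sql
/* A database: passive repository, no password. */
CREATE DATABASE Sales_DB FROM DBC
  AS PERM = 100e6;

/* A user: can log on, so a password is required. */
CREATE USER etl_user FROM DBC
  AS PERM = 100e6,
     PASSWORD = etl_user_01;
```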
Q.Difference between a primary key and a Primary Index?
Answer:
Primary Index: it is mandatory; limited to 64 columns per table; allows NULLs and duplicates; it is a physical mechanism; it affects data distribution.
Primary Key: it is optional; no column limit; does not allow NULLs or duplicates; it is a logical mechanism; it does not affect data distribution.
Q.What is the use of Spool Space?
Answer:  Teradata spool space is unused perm space that is used for running queries. Teradata recommends that about 20% of the available perm space be allocated as spool space.
It is used to hold the intermediate results of queries and volatile tables.
Q.What is the need for performance tuning?
Answer:  We do performance tuning to identify bottlenecks and resolve them.
A bottleneck is not an error, but it delays system performance.
Example: a query that is supposed to run in 2 minutes instead runs for half an hour before finally succeeding.
In this situation we need to identify the bottleneck and resolve it.
To identify bottlenecks we use:
a. The EXPLAIN request modifier
b. Teradata Visual Explain
c. Performance Monitor
d. Teradata Manager
Q.Define Explain plan?

Answer: The explain plan displays the execution plan of the SQL statement that is going to be executed by the database. This plan is produced by the component called the optimizer. Generally it displays the information below:
a. Number of AMPs involved
b. Amount of spool memory it occupies
c. Number of rows it affects
d. Type of join strategy used
e. Time it takes to execute
f. Locks it uses, etc.
Syntax : EXPLAIN
Example : EXPLAIN SEL * FROM PARTY;
Q.What is Collect Statistics?
Answer:  Collect stats derives the data demographics of a table. It is an important concept in Teradata: collected stats allow the PE to come up with the least-cost plan for a requested query.
Collect stats determines the confidence level of the PE in estimating
how many rows the query is going to access,
how many unique values a table has, null counts, etc.; all of this information is stored in the data dictionary.
When you submit an EXPLAIN for a query, the parsing engine checks whether stats are available for the requested table.
If collected stats are already available, the PE generates a plan with "high confidence".
If collected stats are unavailable, it reports "low confidence" (or "no confidence").
Syntax : COLLECT STATISTICS ON table INDEX/COLUMN name
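For example (table and column names are hypothetical):

```sql
/* Collect stats on a column and on an index,
   then check the confidence level in the plan. */
COLLECT STATISTICS ON Employee COLUMN (dept_no);
COLLECT STATISTICS ON Employee INDEX (emp_id);

EXPLAIN SELECT * FROM Employee WHERE dept_no = 10;
```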
Q.What is the Least Cost Plan?
Answer:  The plan that executes via the shortest path, in the least time.
Q.What is the Highest Cost Plan?
Answer:  The plan that executes via the longest path, taking the most time.
Q.How many Confidence Level present?
Answer: a.Low
b.No
c.High
d.Join
Q.If collect stats is not done on a table, what will happen?
Answer:  Teradata uses a cost-based optimizer, and cost estimates are based on statistics.
If you have not collected statistics, the optimizer will use dynamic AMP sampling to estimate them. If your table is big and the data is unevenly distributed, dynamic sampling may not return accurate information, and your performance will suffer.
Q.What are the 5 phases in a MultiLoad Utility?

Answer: * Preliminary Phase – Basic Setup *
* DML Phase – Get DML steps down on AMPs*
* Data Acquisition Phase – Send the input data to the AMPs and sort it*
* Application Phase – Apply the input data to the appropriate Target Tables*
* End Phase – Basic Cleanup*
Q.What are the MultiLoad utility limitations?
Answer: MultiLoad is a very powerful utility; it has the following limitations:
* MultiLoad does not support the SELECT statement.
* Concatenation of multiple input data files is not allowed.
* MultiLoad does not support arithmetic functions, e.g. ABS, LOG, etc., in the MLOAD script.
* MultiLoad does not support exponentiation or aggregate operators, e.g. AVG, SUM, etc., in the MLOAD script.
* MultiLoad does not support USIs (Unique Secondary Indexes), referential integrity, join indexes, hash indexes or triggers.
* Import tasks require the use of the PI (Primary Index).
Q.What are the TPump utility limitations?
Answer: The following are the limitations of the Teradata TPump utility:
* Use of the SELECT statement is not allowed.
* Concatenation of data files is not supported.
* Exponential and aggregate operators are not allowed.
* Arithmetic functions are not supported.
Q.Explain Teradata's competitive advantages in detail?
Answer:
1. Automatic, even data distribution
In Teradata, even (uniform, parallel, random) distribution is automatic.
2. High scalability
If you increase the number of nodes, users or workload, Teradata does not sacrifice any performance; it scales linearly. We call this linear scalability.
3. Mature optimizer
Teradata's powerful optimizer supports 64 joins per query and 64 subqueries per query, along with formatting and aggregate commands.
4. Models the business
Teradata supports any business model: star schema, snowflake schema, hybrid schema, normalization, etc.
5. Low TCO (Total Cost of Ownership)
It is easy to install, manage and work with, has full GUI support, and is cheaper in price.
Q.How do you set the session mode parameters in BTEQ?
Answer: .SET SESSION TRANSACTION ANSI —–> this sets ANSI mode
.SET SESSION TRANSACTION BTET —–> this sets Teradata transaction mode.
These commands have to be entered before logging on to the session.
Q.How Teradata makes sure that there are no duplicate rows being inserted when its a SET table?

Answer: Teradata redirects the newly inserted row, as per its PI, to the target AMP (on the basis of its row hash value). If it finds the same row hash value on that AMP (a hash synonym), it compares the whole row to determine whether it is a duplicate.

If it is a duplicate, Teradata silently skips it without throwing any error.
Q.List types of HASH functions used in teradata?

Answer: There are HASHROW, HASHBUCKET, HASHAMP and HASHBAKAMP.

The SQL hash functions are:

  • HASHROW (column(s))
  • HASHBUCKET (hashrow)
  • HASHAMP (hashbucket)
  • HASHBAKAMP (hashbucket)

Q.What is a derived table?
Answer:  1. It stores intermediate results and calculations.
2. You can specify a derived table in an SQL statement (typically a SELECT).

3. The table is created and dropped as part of the query.
4. It is stored in spool memory.
5. Once the query finishes executing, the table is no longer available.
6. These tables are also called inline query tables.
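A sketch with hypothetical names: the derived table d exists only in spool for the duration of the query:

```sql
SELECT e.emp_name, e.salary, d.avg_sal
FROM   Employee e
JOIN ( SELECT dept_no, AVG(salary) AS avg_sal
       FROM   Employee
       GROUP BY dept_no ) AS d      /* derived (inline) table */
  ON   e.dept_no = d.dept_no
WHERE  e.salary > d.avg_sal;
```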

Q.What is journaling? Why does Teradata require journaling?
Answer: Journaling is a data protection mechanism in Teradata; it protects against data loss on failure.
Journals maintain the before-images and after-images of a DML transaction, starting and ending at a checkpoint.
When a DML transaction fails, the table is restored to the last available checkpoint using the journal images.
There are 3 types of journals:
Permanent Journal
Transient Journal
Recovery Journal
Q.How many types of images does journaling support?
Answer: Four types of images are supported by journaling:
a. Single image —> one copy of the data is kept.
b. Dual image —> two copies of the data are kept.
c. Before image —> a copy of the row is taken before changes occur.
d. After image —> a copy of the row is taken after changes happen.
Q.What is the Transient Journal?
Answer: The Transient Journal is an area of space in the DBC database used primarily for storing rollback information during inserts, deletes and updates of tables.
Detailed explanation:
The Transient Journal maintains a copy of the before-images of all rows affected by the transaction.
In the event of transaction failure, the before-images are reapplied to the affected tables, the images are deleted from the journal, and a rollback operation is completed.
In the event of transaction success, at the point of transaction commit, the before-images for the transaction are discarded from the journal.
In summary, if a transaction fails (for whatever reason), the before-images in the Transient Journal are used to return the data (in the tables involved in the transaction) to its original state.
Q.What is the Permanent Journal?
Answer: The purpose of the Permanent Journal is to provide selective or full database recovery to a specified point in time. It permits recovery from unexpected hardware or software disasters.
The Permanent Journal also reduces the need for full-table backups, which can be costly in both time and resources.
Q.What are the different image options for Permanent Journal?
Answer: There are four image options for the Permanent Journal:
· Before Journal
· After Journal
· Dual Before Journal
· Dual After Journal
Q.Permanent Journals are automatically purged in teradata. True / False?
Answer:  False.
The Permanent Journal must be manually purged from time to time.
Q.Where does TD store transient journal?
Answer: In perm space, in DBC (the transient journal table).
That special table can grow beyond DBC's perm limit, until the whole system runs out of perm space.
Q.What are the different return codes(severity errors) in Teradata utilities?
Answer: There are 3 basic return codes (severity errors) in teradata utilities.
4 – Warning
8 – User error
12 – System error
Q.How will you connect from one database server to another server?
Answer: We can connect from one server to another in UNIX using the commands
ssh, FTP or su.
ssh – ssh user_id@server_name
Q.What is the meaning of skewness in Teradata?
Answer: Data or AMP skew occurs in Teradata due to uneven distribution of data across the AMPs. Often this leads to spool space errors too. To avoid skew, try to select a Primary Index that has as many unique values as possible.
PI columns like month or day have very few unique values, so during data distribution only a few AMPs hold all the data, resulting in skew. If a column (or combination of columns) chosen as the PI enforces uniqueness on the table, then the data distribution will be even and the data will not be skewed.
Q.Is choosing the Primary Index column important?
Answer: The success of a Teradata warehouse starts with choosing the correct column(s) for the Primary Index.
Try to choose a column that has unique values, so that data is distributed evenly among all AMPs.
Otherwise, skew will come into the picture.
The Primary Index also provides the fastest path to retrieve a row.
Q.What are the basic rules that define how a PI is defined in Teradata?
Answer: These are the rules for how a Primary Index is defined in Teradata:
a. Only one Primary Index per table.
b. It is the physical mechanism that assigns a row to an AMP.
c. A Primary Index value can be unique or non-unique.
d. A Primary Index can be a composite of up to 64 columns.
e. The Primary Index of a populated table cannot be modified.
Q.What are the basic criteria to select Primary Index column for a given table?

Answer: A thumb rule of ADV demographics is followed.
Access Demographics
Identify index candidates that maximize one-AMP operations.
Columns most frequently used for access (Value and Join).
Distribution Demographics
Identify index candidates that optimize parallel processing.
Columns that provide good distribution.
Volatility Demographics
Identify index candidates with low maintenance I/O.
Q.Can you explain how the PE and AMPs communicate?
Answer: When a user connects to the Teradata database, a session is opened with a Parsing Engine (PE). When the user then submits a query:
1. First the PE takes the query, checks the syntax and verifies the user's access rights.
2. If everything looks okay, the PE prepares an action plan for the AMPs:
which AMPs should respond, which row IDs to read, etc.
3. The PE then sends the action plan to the AMPs via BYNET messages.
4. The corresponding AMPs execute the plan, read the data and send it back to the PE.
The PE then sends the data to the user.
Q.Do Permanent Journals and secondary indexes require perm space?
Answer: Yes.
Q.Which objects require perm space in Teradata?
Answer: Tables and stored procedures require perm space.
Views, macros and triggers do not require perm space.
Q.What is LOG TABLE?

Answer: A log table maintains record of all checkpoints related to the load job, it is essential/mandatory to specify a log table in a job. This table will be useful in case you have a job abort or restart due to any reason.
Q.What is the use of partitioning?
Answer: If you create a PPI on a table, then the data on each AMP is ordered by the partition column.
Example:
If we partition on deptno, then all dept 10 records are stored together on the AMP, and all dept 20 records are stored together.
Q.Is it possible to alter the NO RANGE and UNKNOWN partitions of a table?
Answer: Yes, if the table is empty we can alter the NO RANGE and UNKNOWN partitions.
Q.Can you apply a read lock on table where write lock is already applied? 
Answer: No
Q.Explain what low and high confidence mean in EXPLAIN output?
Answer: EXPLAIN displays the plan prepared by the optimizer.
Confidence levels indicate how well the optimizer knows the demographics of the data for a particular step.
High confidence: the optimizer has a good estimate of the number of rows that the step will return; for example, when index or column (or range) statistics exist and no join is involved.
Low confidence: some stats are available, e.g. a join with stats available on both sides.
No confidence: no stats are available and a join is involved.
Q.Why doesn't the FastLoad utility support MULTISET tables and duplicate rows?
Answer:  A MULTISET table supports duplicate rows, but the FastLoad utility does not.
The restart logic is one of the reasons. FastLoad loads data in blocks (64K). During a restart, FastLoad may resend some rows that occur after a checkpoint; because of this, FastLoad rejects duplicates.
Example:
Consider 20 rows to be loaded into a table with FastLoad and a checkpoint every 5 rows. If a restart occurs after the 7th row, FastLoad may resend the 6th and 7th rows to the AMPs. These records would be considered duplicates and rejected, so genuine duplicates cannot be distinguished from resent rows.
Q.Can you recover the password of a user in Teradata?
Answer: No, you cannot recover the password of a user in Teradata. Passwords are stored in the data dictionary table (DBC.DBASE) using a one-way encryption method.
You can view the encrypted passwords using the following query:
SEL * FROM DBC.DBASE;
Q.What is the difference between a subquery and a correlated subquery?
Answer:  Subquery
If queries are written in a nested manner, the nested query is termed a subquery.
The inner query is executed first, and only once.
Correlated subquery
A correlated subquery is executed once for each row of the parent query.
The inner query is executed many times, driven by the outer query.
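Sketched with hypothetical names; the plain subquery's inner SELECT runs once, while the correlated one re-executes per outer row:

```sql
/* Plain subquery: inner SELECT executes once. */
SELECT emp_name
FROM   Employee
WHERE  salary > (SELECT AVG(salary) FROM Employee);

/* Correlated subquery: inner SELECT re-executes for each
   outer row because it references e.dept_no. */
SELECT emp_name
FROM   Employee e
WHERE  salary > (SELECT AVG(salary)
                 FROM   Employee i
                 WHERE  i.dept_no = e.dept_no);
```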
Q.What is the FILLER command in Teradata?
Answer: While running FastLoad or MultiLoad, if you do not want to load a particular field from the data file into the target table, use the FILLER keyword in the layout to skip it.
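A sketch of a MultiLoad layout (field and table names are hypothetical): the middle field is read from the file but never loaded:

```sql
.LAYOUT file_layout;
.FIELD  cust_id  * VARCHAR(10);
.FILLER skip_me  * VARCHAR(20);   /* present in the file, not loaded */
.FIELD  amount   * VARCHAR(12);

.DML LABEL ins_dml;
INSERT INTO Sales_Stg (cust_id, amount)
VALUES (:cust_id, :amount);
```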
Q.Difference between Access Logging and Query Logging in Teradata?
Answer: 1. Access Logging is concerned with security (who is doing what).
With access logging, you ask the database to log who is doing what on a given object.
The information stored is based on the object, not the SQL fired or the user who fired it.
2. Query Logging (DBQL) is used for debugging (what is happening around).
With DBQL, the database keeps tracking various parameters — SQL text, resource and spool usage, and other details that help you understand what is going on; this information is useful for debugging a problem.
Q.What is Basic Teradata Query language?
Answer: 1. It allows us to write SQL statements along with BTEQ commands.
We can use BTEQ for importing, exporting and reporting purposes.
2. Commands that start with a dot (.) can be terminated with a semicolon (;), but the semicolon is not mandatory.
3. BTEQ assumes anything written without a leading dot is an SQL statement, which requires a (;) to terminate it.
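A small BTEQ script sketch (file and table names are hypothetical) showing the dot-command vs SQL distinction:

```sql
/* Dot commands need no semicolon; SQL statements do. */
.LOGON tdpid/username,password
SELECT COUNT(*) FROM Sales_Stg;
.EXPORT REPORT FILE = counts.txt
SELECT dept_no, COUNT(*)
FROM   Employee
GROUP BY 1;
.EXPORT RESET
.QUIT
```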
Q.How can you track the login parameters of users in Teradata?
Answer: You can view these parameters in the data dictionary table DBC.LOGONOFF:
SELECT LOGDATE, LOGTIME, USERNAME, EVENT FROM DBC.LOGONOFF;
Q.How can you use HASH functions to view the data distribution across all AMPs in Teradata?

Answer: The hash functions can be used to view the distribution of rows for a chosen Primary Index:
SELECT HASHAMP(HASHBUCKET(HASHROW(<PI columns>))) AS "AMP#", COUNT(*)
FROM <table> GROUP BY 1 ORDER BY 2 DESC;
HASHROW — returns the row hash value for a given value.
HASHBUCKET — the grouping (bucket) of a specific hash value.
HASHAMP — the AMP that is associated with the hash bucket.
By looking at the result set of the above query you can easily see the data distribution across all AMPs in your system, and so identify uneven data distribution.
Q.How do you transfer large amounts of data in Teradata?
Answer: Transferring large amounts of data can be done using the Teradata utilities:
BTEQ, FastLoad, MultiLoad, TPump and FastExport.
BTEQ (Basic Teradata Query) supports all 4 DMLs: SELECT, INSERT, UPDATE and DELETE.
BTEQ also supports IMPORT/EXPORT protocols.
FastLoad, MultiLoad and TPump transfer data from the host to Teradata.
FastExport is used to export data from Teradata to the host.
Q.How can you determine I/O and CPU usage at a user level in Teradata?
Answer: You can find I/O and CPU usage in the data dictionary table
DBC.AMPUSAGE:
SELECT ACCOUNTNAME, USERNAME, SUM(CPUTIME) AS CPU, SUM(DISKIO)
AS DISKIO FROM DBC.AMPUSAGE GROUP BY 1,2 ORDER BY 3 DESC;
Q.What is Normalization?
Answer:  Normalization is the process of reducing a complex data structure into a simple, stable one.
Generally this process involves removing redundant attributes, keys, and relationships from the conceptual data model.
Q.How many types of indexes are present in Teradata?

Answer: There are 5 different index types in Teradata:

  1. Primary Index
     a. Unique Primary Index
     b. Non-Unique Primary Index
  2. Secondary Index
     a. Unique Secondary Index
     b. Non-Unique Secondary Index
  3. Partitioned Primary Index
     a. Case partitioning
     b. Range partitioning
  4. Join Index
     a. Single-table join index
     b. Multi-table join index
     c. Sparse join index (a constraint applied to the join index in a WHERE clause)
  5. Hash Index

Q.What do Teradata's 6 A's stand for?

Answer:  Active Load
Active Access
Active Events
Active Workload Management
Active Enterprise Integration
Active Availability.

Q.Which is faster – a MultiLoad delete or a normal DELETE command?

Answer: A MultiLoad delete is faster than the normal DELETE command, since the deletion happens in data blocks of 64 Kbytes, whereas the DELETE command deletes data row by row.

The transient journal maintains entries only for the DELETE command, since the Teradata load utilities do not use the transient journal.

Q.What tools can be used for Active Load in Teradata?

Answer: ETL tools can use queue tables and triggers, along with the FastLoad, MultiLoad and TPump utilities.

Q.How do you skip or get the first and last records from a flat file through MultiLoad?

Answer: The .IMPORT command in MLOAD has an option to give the record number from which processing should begin,
i.e. FROM m, where m is the logical record number, as an integer, of the record in the identified data source where processing is to begin.
You can specify m as 2 and processing will start from the second record.

THRU k and FOR n are options of the same .IMPORT command that control where processing ends.

If FROM (the start record) and FOR (the record count) are not mentioned, MLOAD considers records from the start to the end of the file.

Q.What is the use of TEMP space?

Answer: TEMPORARY (TEMP) space:
A database may or may not have TEMP space; however, it is required if global temporary tables are used.