
Tuesday 21 May 2019

Mapping between GL and XLA tables



The assumptions are:
GL Entry Summarization: Summarize (By GL Date or Period)
JLT – Transfer to GL: Detail
Expected behavior: entries are summarized at GL_JE_HEADERS and detailed at GL_JE_LINES.
Refer to the following post for how this is determined.
  A. Relation between GL_JE_BATCHES and XLA_AE_HEADERS
A.1) Group_id column in GL_JE_BATCHES that maps to the subledger (XLA_AE_HEADERS) group_id
  1. gl_je_batches.group_id => xla_ae_headers.group_id
  2. 1 group_id will have many xla_ae_headers rows
  3. 1 group_id will have 1 or many periods in xla_ae_headers
  4. 1 group_id will have 1 or many ledger_ids in xla_ae_headers
  5. Ideally there shouldn’t be any rows in xla_ae_headers with a null group_id
  6. 1 group_id and 1 period (GL period) in xla_ae_headers will make 1 batch in GL_JE_BATCHES
  7. Since 1 group_id in xla_ae_headers can have multiple periods, 1 group_id in xla_ae_headers can make multiple batches in gl_je_batches (see the sketch below)
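
A minimal sketch of this relationship, assuming the standard R12 columns (xla_ae_headers.group_id, gl_je_batches.group_id and gl_je_batches.default_period_name); the group_id value used here is purely illustrative:

-- Batches created in GL for one subledger group_id: one GL batch per period
select xah.group_id,
       xah.period_name,
       count(distinct xah.ae_header_id) ae_header_count,
       count(distinct gjb.je_batch_id)  gl_batch_count
from   xla_ae_headers xah,
       gl_je_batches  gjb
where  gjb.group_id (+)            = xah.group_id
and    gjb.default_period_name (+) = xah.period_name
and    xah.group_id                = 12345   -- illustrative group_id
group by xah.group_id, xah.period_name;
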
A.2) Group_id in GL_JE_BATCHES that does NOT map to the subledger (XLA_AE_HEADERS) group_id
  1. 1 group_id will have multiple GL batches (je_batch_id)
  2. Ideally 1 group_id and 1 period would make 1 journal batch, but since these are mostly manual journal batches created through the spreadsheet tool (Web ADI), here 1 group_id, 1 period and 1 batch name in the spreadsheet uploader make one batch in GL_JE_BATCHES.
  3. Group_id in GL_JE_BATCHES can be null for batches created manually through the front end; please note these manually created journals are different from spreadsheet-uploaded manual journals.
  B. Relation of GL_JE_HEADERS with XLA_AE_HEADERS and XLA_AE_LINES
  1. Here XLA_AE_HEADERS and XLA_AE_LINES together create the GL_JE_HEADERS record.
  2. XLA_AE_HEADERS and XLA_AE_LINES are summarized as follows to create a journal header, i.e. 1 row in GL_JE_HEADERS (see the sketch below):
Source => either from XLA_AE_HEADERS or XLA_AE_LINES
Category => category from XLA_AE_HEADERS (please note AP and GL categories are not the same)
Ledger => either from XLA_AE_HEADERS or XLA_AE_LINES
Period => period from XLA_AE_HEADERS
Currency => currency_code from XLA_AE_LINES
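
A minimal sketch of this header-level summarization, assuming the standard columns (xla_ae_headers.je_category_name, period_name, ledger_id and xla_ae_lines.currency_code); it only approximates the GL_JE_HEADERS rows that Journal Import produces, and the group_id is illustrative:

-- Approximate the journal headers Journal Import would build from these SLA entries
select xah.ledger_id,
       xah.period_name,
       xah.je_category_name,
       xal.currency_code,
       count(*) sla_line_count
from   xla_ae_headers xah,
       xla_ae_lines   xal
where  xal.application_id = xah.application_id
and    xal.ae_header_id   = xah.ae_header_id
and    xah.group_id       = 12345   -- illustrative group_id
group by xah.ledger_id, xah.period_name, xah.je_category_name, xal.currency_code;
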
  C. Relation of GL_JE_LINES with XLA_AE_HEADERS and XLA_AE_LINES
  1. Here XLA_AE_HEADERS and XLA_AE_LINES together create the lines in the GL_JE_LINES table.
  2. XLA_AE_HEADERS and XLA_AE_LINES are summarized as follows to create a journal line, i.e. 1 row in GL_JE_LINES:
Ledger => either from XLA_AE_HEADERS or XLA_AE_LINES
Period => period from XLA_AE_HEADERS
GL Code => code_combination_id from XLA_AE_LINES
Entered_Dr => entered_dr from XLA_AE_LINES
Entered_Cr => entered_cr from XLA_AE_LINES
Accounted_Dr => accounted_dr from XLA_AE_LINES
Accounted_Cr => accounted_cr from XLA_AE_LINES
Description => may match the description from XLA_AE_LINES if the journal line type is detailed; if it is summarized, it may not match.
  D. The link from GL to SLA and the reference columns in GL_IMPORT_REFERENCES
Please refer to the following post for these details; a minimal drill-down sketch is also shown below.
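
As a rough illustration (not a substitute for the referenced post), the following sketch drills from a GL journal line back to its subledger lines through GL_IMPORT_REFERENCES, assuming the standard R12 link columns gl_sl_link_id and gl_sl_link_table; the je_header_id is illustrative:

-- Drill from GL journal lines back to the SLA lines that built them
select gjl.je_header_id,
       gjl.je_line_num,
       xah.ae_header_id,
       xal.ae_line_num,
       xal.accounted_dr,
       xal.accounted_cr
from   gl_je_lines          gjl,
       gl_import_references gir,
       xla_ae_lines         xal,
       xla_ae_headers       xah
where  gir.je_header_id     = gjl.je_header_id
and    gir.je_line_num      = gjl.je_line_num
and    gir.gl_sl_link_id    = xal.gl_sl_link_id
and    gir.gl_sl_link_table = xal.gl_sl_link_table
and    xal.application_id   = xah.application_id
and    xal.ae_header_id     = xah.ae_header_id
and    gjl.je_header_id     = 100123;   -- illustrative je_header_id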

GL Flexfield Qualifiers in R12

  • Cost Center: Usually assigned to Department Segment. This qualifying segment is used by Oracle Assets.
  • Natural Account: This qualifier, when attached to a segment, enables the five account natures: Asset, Liability, Expense, Revenue and Owner’s Equity. It also enables the Reconciliation account and Third Party Control options.
  • Balancing: This qualifier is usually assigned to the segment on which the Balance Sheet is required. The system balances the debits and credits of the ledger on this segment.
  • Intercompany: This qualifier is used to identify the transaction between two entities under one group. More on this in Intercompany.
  • Management: This qualifier is used as an additional control on Data Access Set.
  • Secondary Tracking: To be discussed
Flexfield qualifiers determine the nature of a segment. If you define a COA structure as
Company-Department-Account
how will the system know which segment is the natural account? Which segment should be used to balance the accounting entries? Which segment will be used as the cost centre in Fixed Assets?
It is the qualifier through which the system assigns a particular attribute to a segment. If I attach the Natural Account qualifier to the Company segment, the system will not validate anything against this assignment. It will simply enable the five account natures and treat the Company segment as the natural account.
So for a particular segment, a particular qualifier is assigned. This is a one-time activity and it freezes as we freeze the COA structure.
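
A minimal query sketch to see which qualifier is enabled on which segment of a chart of accounts, assuming the standard FND tables (fnd_id_flex_segments, fnd_segment_attribute_values) and qualifier codes such as GL_BALANCING and GL_ACCOUNT; the id_flex_num is illustrative:

-- Segments of a COA structure and the flexfield qualifiers enabled on them
select fifs.segment_name,
       fifs.application_column_name,
       fsav.segment_attribute_type qualifier,
       fsav.attribute_value        enabled_flag
from   fnd_id_flex_segments         fifs,
       fnd_segment_attribute_values fsav
where  fsav.application_id          = fifs.application_id
and    fsav.id_flex_code            = fifs.id_flex_code
and    fsav.id_flex_num             = fifs.id_flex_num
and    fsav.application_column_name = fifs.application_column_name
and    fifs.id_flex_code            = 'GL#'
and    fifs.id_flex_num             = 101   -- illustrative chart of accounts id
and    fsav.segment_attribute_type in ('GL_BALANCING','GL_ACCOUNT','FA_COST_CTR',
                                       'GL_INTERCOMPANY_SEGMENT','GL_MANAGEMENT')
order by fifs.application_column_name, fsav.segment_attribute_type;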

Where Is The Invoice Validation Status Stored? How Is The Invoice Validation Status Determined?

Requirement:

How does the Invoices Workbench (APXINWKB) determine the validation status of an invoice?
The validation status of an invoice is displayed at invoice header level, but this information is not stored in the corresponding table AP_INVOICES_ALL like the rest of the data at invoice header level.
How does the system store this information?
Solution
There is no column in the AP_INVOICES_ALL table that stores the validation status.
Invoice Distributions are validated individually and the status is stored at the Invoice Distributions level.
This status is stored in the match_status_flag column of the ap_invoice_distributions_all table (AP_INVOICE_DISTRIBUTIONS_ALL.MATCH_STATUS_FLAG).
Valid values for this column are:
A – Validated (it used to be called Approved)
N or null – Never validated
T – Tested but not validated
S – Stopped
The invoice header form derives the invoice validation status based on the following:
Validated:
  • If ALL of the invoice distributions have a MATCH_STATUS_FLAG = ‘A’
  • If MATCH_STATUS_FLAG is ‘T’ on ALL the distributions and the org does not have encumbrance enabled, then the invoice shows as Validated (provided there is no unreleased hold)
Never Validated:
  • If all of the invoice distributions have a MATCH_STATUS_FLAG = null or ‘N’
Needs Revalidation:
  • If any of the invoice distributions have a MATCH_STATUS_FLAG = ‘T’ and the org has Encumbrance enabled.
  • If the invoice distributions have MATCH_STATUS_FLAG values = ‘N’, null and ‘A’ (mixed).
  • If the invoice distributions have MATCH_STATUS_FLAG value = ‘S’ (stopped).
  • If there are any rows in AP_HOLDS that do not have a release code.
  • In R12 only, if an invoice line has GENERATE_DISTS set to Y but there are no distributions created yet.  (Distributions displayed in “preview” mode do not exist yet.)
MATCH_STATUS_FLAG would remain ‘T’ if the invoice has a hold which does not allow accounting. In such a scenario you cannot account the invoice because the event status will be ‘I’.
As soon as the hold is released from the Holds tab of the Invoice Workbench, the event status is set to ‘U’. The invoice is shown as Validated and accounting is allowed. MATCH_STATUS_FLAG still remains ‘T’.
NOTE:   Encumbrances are turned on in the Financial Options form.  Table: FINANCIALS_SYSTEM_PARAMS_ALL.
Invoice Distributions have Encumbrance flags and IDs.
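
As a rough illustration of the logic above (a simplified sketch: it ignores the encumbrance-specific handling of ‘T’ and the R12 GENERATE_DISTS case), a query along these lines can approximate the derived status per invoice:

-- Approximate the derived validation status from distribution flags and unreleased holds
select inv.invoice_id,
       inv.invoice_num,
       case
         when inv.unreleased_holds > 0                  then 'Needs Revalidation'
         when inv.min_flag = 'A' and inv.max_flag = 'A' then 'Validated'
         when inv.min_flag = 'N' and inv.max_flag = 'N' then 'Never Validated'
         else                                                'Needs Revalidation'
       end derived_status
from  (select i.invoice_id,
              i.invoice_num,
              min(nvl(d.match_status_flag, 'N')) min_flag,
              max(nvl(d.match_status_flag, 'N')) max_flag,
              (select count(*)
                 from ap_holds_all h
                where h.invoice_id = i.invoice_id
                  and h.release_lookup_code is null)  unreleased_holds
         from ap_invoices_all              i,
              ap_invoice_distributions_all d
        where d.invoice_id = i.invoice_id
        group by i.invoice_id, i.invoice_num) inv;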

Reference: Doc ID 301806.1

Data Conversion vs. Data Migration: What’s The Difference?

In the past I have worked as a migration lead, and now I am working as a conversion lead for an ERP implementation project. Having worked on both (migration and conversion), this is an effort to put forward the difference between migration and conversion.

The terms data conversion and data migration are still sometimes used interchangeably on the internet. However, they do mean different things. Data conversion is the transformation of data from one format to another. It implies extracting data from the source, transforming it and loading the data into the target system based on a set of requirements.
Data migration is the process of transferring data between silos, formats, or systems. Data conversion is therefore only the first step in this complicated process. Beyond data conversion, data migration includes data profiling, data cleansing, data validation, and the ongoing data quality assurance process in the target system.
Both terms are used as synonyms by many internet resources. I think the reason might be that there are very few situations in which a company has to convert data without migrating it.
What is the need for migration/conversion?
Migration/conversion is required when we are upgrading from one version to another (e.g. Oracle Apps R11 to Oracle Apps R12) or moving data from a legacy system to Oracle Apps. There is a bulk of data (sometimes millions of records or more) that needs to be moved from one system to another, and before moving the data it should be validated so that only valid records are entered into Oracle Apps.
If the two systems (target and source) do not have the same structure for the data (the tables are not the same, the table structures are not the same, or the data stored in the database is not the same), the data needs to be translated (e.g. upgrading from Oracle 11i to R12, where the table structures are not the same); in that case we call it conversion (any kind of translation of the source data to make it suitable for the target system), otherwise migration. Re-implementation of the same Oracle Apps version is a good example of data migration.

What is Conversion?
Conversion of data means translating the data to suit the target system (the data should be formatted according to the target system) and then moving the translated data using interface programs/APIs.
• Identify the data to be imported into the new system (business requirement).
• Extract it into a flat file/staging table.
• Translate/convert/format the data.
• Load the data into the interface table (using SQL*Loader/DB link/others) after validation (if loading the data using an interface), and then launch the standard interface concurrent program to load the data into the Oracle Apps base tables (see the sketch below).
• If using an API, fetch the data, validate it and then call the API to import the data.
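
As a minimal sketch of the interface-table step, assuming a GL journal conversion into the standard GL_INTERFACE table (the staging table xx_gl_conv_stg, its columns and the segment layout are illustrative):

-- Load validated staging rows into the Journal Import open interface
insert into gl_interface
       (status, ledger_id, accounting_date, currency_code,
        date_created, created_by, actual_flag,
        user_je_category_name, user_je_source_name,
        segment1, segment2, segment3,
        entered_dr, entered_cr, reference10, group_id)
select 'NEW', stg.ledger_id, stg.accounting_date, stg.currency_code,
       sysdate, stg.created_by, 'A',
       stg.je_category, stg.je_source,
       stg.segment1, stg.segment2, stg.segment3,
       stg.entered_dr, stg.entered_cr, stg.line_description, stg.group_id
from   xx_gl_conv_stg stg          -- illustrative staging table
where  stg.validation_status = 'VALIDATED';
-- Then submit the standard Journal Import concurrent program for this group_id.
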

Conversion can be complex because you need to have a complete understanding of the source you’re converting from, and the format you’re converting to. If you don’t, you run the risk of compromising your data and ruining its integrity during the conversion process.
Example: Conversion of data like suppliers, customers from Oracle ERP to Workday ERP.
What is Migration?
Migration of data means moving the data from one system to another using interface programs/APIs where both systems have the same data structure.
Process of migrating data:
• Identify the data to be imported into the new system (business requirement).
• Extract the data into a flat file/staging table.
• Load the data into the interface table (using SQL*Loader/DB link/others) after validation (if loading the data using an interface).
The migration process is very detailed and can take months to complete. Here is a general process a typical data migration would follow.
  1. Database Review: A review of the database and of the current implementation is conducted.
  2. Data Mapping: Thoroughly review the tables and data in the database to find unique tables and columns, as well as potential data discrepancies or inconsistencies. It’s important to track which tables the data currently resides in and where the data will be migrated to using data mapping.
  3. Migration: Once the mapping is complete, the migrated data can be transferred from the original database to the new database that can be tested and put into production. 
Where data conversion deals with individual data fields, data migration is the process of moving and merging entire databases or programs. 
For example, a client migrates data from multiple databases into a single database, or migrates data from one CRM/ERP system to another.
Data migration may appear simple at first glance, but just like data conversion, it’s actually pretty complex! It usually involves what’s called data mapping, a strategic process that maps data from one database to another by looking at where the data lives currently, and where it will live once it’s moved.
Before migrating data, you need to be sure that the data fields in your old database exist in the new database. You also have to understand how both programs handle different types of file formats, and whether or not any data fields will be merged during the process.
How do conversion/migration and interfaces differ?
There are a good number of parameters on which they can be compared. Here are a few:

Frequency
• Conversions/migration are a one-time event
• Interfaces are ongoing
Occurrence in the project timeline
• Conversions/migration are executed before production
• Interfaces are executed during production
Manner of execution
• Conversions/migration are batch
• Interfaces may be batch or real time
Complexity
• Conversion/migration can be very complex; it depends entirely on the data mapping activity.
• Coordinating with other systems makes interfaces more complex

Maintenance
• Maintenance of an interface is a somewhat cost-intensive task.

Accumulated depreciation or YTD depreciation projection as of a specific period end


Oracle by default does not maintain future depreciation information (accumulated or YTD), as depreciation can change due to cost adjustments, additions or asset retirements.


Often there is a requirement to write a function or package that returns YTD depreciation or accumulated depreciation as of year-end or a specific period (in this example, Dec 2017): for example, accumulated depreciation as of Dec 2017, YTD depreciation as of Dec 2017, and the remaining depreciation periods for an asset after Dec 2017.
Here I had a requirement to get asset depreciation as of Dec 2017 for a Workday Financials conversion.
Following is the function that returns depreciation as of a certain period.
CREATE OR REPLACE FUNCTION APPS.xxcb_get_asset_accum_deprn(
    v_asset_num NUMBER, v_deprn_type varchar2, l_future_period number)
  RETURN NUMBER
AS
  --
  l_rem_deprn_period number;
  l_asset_id number;
  l_current_deprn_period number;
  l_no_of_deprn_period number; -- to be calculated
  l_total_depreciation number;
  l_monthly_depreciation number;
  l_ytd_depreciation number;
  l_accum_depreciation number; 
  l_total_sum_depreciation number;
  l_asset_cost number;
  --
BEGIN
--
--Get asset id
select asset_id
into l_asset_id
from fa_additions_b 
where asset_number = v_asset_num;


begin
--1) Get remaining period of the asset
--2) Get Current Period of Asset Depeciation
--3) Get asset depreciation of current period or as of Jun-06
--4) Get accumulated depreciation as of current period
--5) Get YTD Depreciation of Asset 
--6) Get Asset Cost
select
DECODE (books.period_counter_fully_retired,NULL, GREATEST( NVL (books.life_in_months, 0)- round(MONTHS_BETWEEN (nvl(fdp.PERIOD_CLOSE_DATE,fdp.PERIOD_OPEN_DATE),books.prorate_date)),0),0) rem_deprn_period,
substr(fdp.period_name,1,2) Current_Period_Of_Asset,
decode(gl_code.segment1, 10,dep_sum.SYSTEM_DEPRN_AMOUNT,(select dep_mc_sum.SYSTEM_DEPRN_AMOUNT from fa_mc_deprn_summary dep_mc_sum where dep_mc_sum.asset_id = fab.asset_id and dep_mc_sum.set_of_books_id = 2025 and dep_mc_sum.period_counter = dep_sum.period_counter)) deprn_current_period,
decode(gl_code.segment1, 10,dep_sum.deprn_reserve,(select dep_mc_sum.deprn_reserve from fa_mc_deprn_summary dep_mc_sum where dep_mc_sum.asset_id = fab.asset_id and dep_mc_sum.set_of_books_id = 2025 and dep_mc_sum.period_counter = dep_sum.period_counter)) accum_deprn_current_period,
nvl(decode(gl_code.segment1, 10,(select dep_sum_ytd.ytd_deprn from fa_deprn_summary dep_sum_ytd where dep_sum_ytd.asset_id = fab.asset_id and dep_sum_ytd.deprn_run_date = (select max(deprn_run_date) from fa_deprn_summary a,fa_deprn_periods b where a.asset_id = fab.asset_id and a.period_counter = b.period_counter and b.fiscal_year = '2017')),(select dep_mc_sum.ytd_deprn from fa_mc_deprn_summary dep_mc_sum where dep_mc_sum.asset_id = fab.asset_id and dep_mc_sum.set_of_books_id = 2025 and dep_mc_sum.deprn_run_date = (select max(deprn_run_date) from fa_mc_deprn_summary a,fa_deprn_periods b where a.asset_id = fab.asset_id and a.set_of_books_id = 2025 and a.period_counter = b.period_counter and b.fiscal_year = '2017'))),0) Year_To_Date_Depreciation,
decode( gl_code.segment1,10,books.cost,(select mc_book.cost from fa_mc_books mc_book where mc_book.asset_id = fab.asset_id and mc_book.date_ineffective is null and mc_book.transaction_header_id_out  is null and set_of_books_id = 2025)) Acquisition_Cost
into
l_rem_deprn_period,
l_current_deprn_period,
l_monthly_depreciation,
l_accum_depreciation,
l_ytd_depreciation,
l_asset_cost
from fa_additions_b fab,
fa_additions_tl fat,
fa_distribution_history fdh,
gl_code_combinations_kfv gl_code,
fa_books books,
fa_categories_b fcb,
fa_category_book_defaults fcbd,
fa_deprn_summary dep_sum,
fa_deprn_periods fdp,
fa_locations loc 
where fab.asset_id = fdh.asset_id
and fdh.code_combination_id = gl_code.code_combination_id
and fab.asset_id = fat.asset_id
and fab.asset_id = books.asset_id
and books.date_ineffective is null 
and books.transaction_header_id_out  is null
and books.book_type_code = 'US_CB CORP'
and fab.asset_category_id = fcb.category_id
and fcbd.category_id = fcb.category_id
and fcbd.book_type_code = 'US_CB CORP'
and fcbd.end_dpis is null
and books.asset_id = dep_sum.asset_id
and dep_sum.period_counter=fdp.period_counter
--and fdp.period_name = '06-2017'
and dep_sum.deprn_run_date = (select max(deprn_run_date) from fa_deprn_summary where asset_id = fab.asset_id and book_type_code = 'US_CB CORP')
and fdp.book_type_code = 'US_CB CORP'
and fdh.location_id = loc.location_id
and fdh.date_ineffective is null
and fab.asset_type = 'CAPITALIZED'
and fab.asset_id=l_asset_id;
--
end;

-- No of period depreciation to be calculated
l_no_of_deprn_period:= l_future_period -l_current_deprn_period;


-- Calculate depreciation amount from current period to end of Dec-2017
if (l_rem_deprn_period < l_no_of_deprn_period) then
--
l_total_depreciation := l_monthly_depreciation * l_rem_deprn_period;
--
else
--
l_total_depreciation := l_monthly_depreciation * l_no_of_deprn_period;
--
end if; 
--
if v_deprn_type = 'ACCUM' then 
-- Calculate total accumulated depreciation until Dec-2017
l_total_sum_depreciation := l_accum_depreciation + l_total_depreciation;
--
if (l_total_sum_depreciation > l_asset_cost) then
--
l_total_sum_depreciation := l_asset_cost;
--
end if; 
--

elsif v_deprn_type = 'YTD' then
-- Calculate total YTD depreciation until Dec-2017
l_total_sum_depreciation := l_ytd_depreciation + l_total_depreciation;
--
end if;
--
RETURN l_total_sum_depreciation;
--
END;
/
 
The three parameters that this function accepts are:
  • Asset Number: the asset number of the asset for which we need to calculate YTD and accumulated depreciation.
  • Depreciation Type: a single function is used to get both YTD and accumulated depreciation. The two possible values for this parameter are “YTD” and “ACCUM”.
  • Future Period: the future period number to project to, for example 12 for December.
Following is the query that returns the remaining periods after 12-2017.
-- Query to get remaining period as of 31-Dec-2017 
select 
greatest(DECODE (books.period_counter_fully_retired,NULL, GREATEST( NVL (books.life_in_months, 0)- round(MONTHS_BETWEEN (nvl(fdp.PERIOD_CLOSE_DATE,fdp.PERIOD_OPEN_DATE),books.prorate_date)),0),0)-(12-(substr(fdp.period_name,1,2))),0) result
from fa_additions_b fab,
fa_additions_tl fat,
fa_distribution_history fdh,
gl_code_combinations_kfv gl_code,
fa_books books,
fa_categories_b fcb,
fa_category_book_defaults fcbd,
fa_deprn_summary dep_sum,
fa_deprn_periods fdp,
fa_locations loc 
where fab.asset_id = fdh.asset_id
and fdh.code_combination_id = gl_code.code_combination_id
and fab.asset_id = fat.asset_id
and fab.asset_id = books.asset_id
and books.date_ineffective is null 
and books.transaction_header_id_out  is null
and books.book_type_code = 'US_CB CORP'
and fab.asset_category_id = fcb.category_id
and fcbd.category_id = fcb.category_id
and fcbd.book_type_code = 'US_CB CORP'
and fcbd.end_dpis is null
and books.asset_id = dep_sum.asset_id
and dep_sum.period_counter=fdp.period_counter
--and fdp.period_name = '06-2017'
and dep_sum.deprn_run_date = (select max(deprn_run_date) from fa_deprn_summary where asset_id = fab.asset_id and book_type_code = 'US_CB CORP')
and fdp.book_type_code = 'US_CB CORP'
and fdh.location_id = loc.location_id
and fdh.date_ineffective is null
and fab.asset_type = 'CAPITALIZED'
and fab.asset_id=2518;
Following are sample scripts for testing this function:
select xxcb_get_asset_accum_deprn(1492,'ACCUM',12) ACCUM_deprn_as_of_Dec_2017 from dual;
--1492, 1729

select xxcb_get_asset_accum_deprn(1492,'YTD',12) YTD_deprn_as_of_Dec_2017 from dual;