Wednesday, September 23, 2020

What is an ESS Job?

ESS jobs in Fusion Apps are the equivalent of Concurrent Programs in Oracle Apps R12. Just as we use Concurrent Programs to run reports, procedures, and scripts in Oracle Apps, we use ESS jobs in Fusion Apps to run BIP reports, procedures, and scripts. We register reports and scripts in Fusion Apps as ESS jobs, and users run these ESS jobs as Scheduled Processes in Oracle Fusion, much like Concurrent Requests in Oracle Apps R12.

In Oracle Apps R12 we create parameters on a Concurrent Program and attach a value set to each parameter to give users a list of values to select from at runtime. In Oracle Fusion we can also create parameters on an ESS job, but we do not attach a value set to an ESS job parameter; instead we attach a List of Values (LOV). In Oracle Fusion, List of Values and Value Set are two different terms and objects. We also cannot create a custom table-type List of Values for an ESS job parameter. Oracle provides standard View Objects (VOs); we create Lists of Values on the basis of these VOs and attach them to the parameters.


Steps to Create an ESS Job in Oracle Fusion

1. Navigate to Setup and Maintenance.


2. Click on Search and search for "Manage Enterprise %".

3. Click on Manage Enterprise Job Definitions and Job Sets for Financial, Supply Chain, and Related Applications.


4.Click on "+" to create to Navigate to ESS job Page


ESS Job Page

5. Enter the Display Name, Name, Path, and Job Application Name. Select BIPJobType as the Job Type and provide the Report ID. To get the Report ID, go to BI Publisher and place the report under the /Shared Folders/ folder (it can live in any folder or subfolder there). Copy the part of the path after /Shared Folders/ up to and including .xdo.
Example:
Report ID: /Custom/Financials/Payables/Test Report.xdo




6. Under the Parameters section, click the Create button and enter the parameters.


Note: If the BIP report has more than one parameter, the parameter order in the ESS job must match the parameter order in the BIP report.

Example:


Order of Parameters in ESS job


7. Save and close the popup, then save and close the ESS Job page as well.
8. Now you can run the job. Navigate to Scheduled Processes.


9. Click on Scheduled Processes and then New Scheduled Process.
10. Select the job you just created, enter the input parameters, and submit the job.

Provide Parameters


Submit the job.


Click on Refresh to see the job status. When the status is Succeeded, click it to see the output file, log file, and XML file.




Tuesday, September 22, 2020

Compression minimises the size of a file by eliminating redundant data from it. Smaller files take up less storage space, so more files can be saved on the same storage. For example, a 100 KB text file might be compressed to 52 KB by eliminating extra spaces or replacing long character strings with short representations.

When the file is read back, an algorithm recreates the original data. Image files are typically compressed as well; the JPEG image format, for example, uses compression to remove redundant pixel data.
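
To make this concrete, here is a minimal Python sketch (using the standard-library zlib module; the sample text is an illustrative choice, not from the original post) that compresses a repetitive text buffer, reports the compression ratio, and verifies that decompression recreates the original data exactly:

import zlib

# Highly repetitive text compresses well; data with little redundancy barely compresses.
original = b"the quick brown fox jumps over the lazy dog " * 250

compressed = zlib.compress(original, 9)   # 9 = best compression
restored = zlib.decompress(compressed)    # lossless round trip

assert restored == original               # the original data is recreated exactly
print("original:  ", len(original), "bytes")
print("compressed:", len(compressed), "bytes")
print("ratio:      %.1f : 1" % (len(original) / len(compressed)))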





Advantage:

Almost any file can be compressed, although files with little redundant data may compress very little, if at all, so compression ratios are a guideline, not a law. For example, a compression ratio of 2 to 1 would, with luck, let 400 GB worth of files fit on a 200 GB disc.

Drawbacks:

It is difficult to know how much a file will actually compress until the compression algorithm has been run on it.

XGBoost is an open-source library that provides a high-performance implementation of gradient-boosted decision trees. An underlying C++ code base combined with a Python interface on top makes the package extremely powerful and easy to use. Gradient boosting is a method in which new models are fitted to predict the residuals (i.e. errors) of prior models.


Tianqi Chen, one of the co-creators of XGBoost, announced (in 2016) that the innovative system features and algorithmic optimizations in XGBoost had rendered it 10 times faster than the most sought-after machine learning solutions. A truly amazing technique!

Did you know CERN recognized it as the best approach to classify signals from the Large Hadron Collider?




  • XGBoost is an ensemble learning method. 
  • Ensemble learning is a systematic solution to combine the predictive power of multiple learners.
  • The resultant is a single model which gives the aggregated output from several models.
  • The models that form the ensemble, also known as base learners, could be either from the same learning algorithm or different learning algorithms. 
  • Bagging and boosting are two widely used ensemble learning methods.
  • Though these two techniques can be used with several statistical models, the most predominant usage has been with decision trees.

Bagging:

Bootstrap aggregating, also called bagging, is a machine learning ensemble meta-algorithm designed to improve the stability and accuracy of machine learning algorithms used in statistical classification and regression. 
It reduces variance and helps to avoid overfitting.
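
As a quick illustration (not part of the original post; the dataset and settings are arbitrary), here is a minimal bagging sketch with scikit-learn, whose BaggingClassifier uses a decision tree as its default base learner:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 100 trees, each fit on a bootstrap sample of the training data;
# predictions are aggregated by majority vote, which reduces variance.
bag = BaggingClassifier(n_estimators=100, random_state=0)
bag.fit(X_train, y_train)
print("bagging accuracy:", bag.score(X_test, y_test))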

Boosting:

  • In boosting, the trees are built sequentially such that each subsequent tree aims to reduce the errors of the previous tree. 
  • Each tree learns from its predecessors and updates the residual errors.
  • Hence, the tree that grows next in the sequence will learn from an updated version of the residuals.
  • The base learners in boosting are weak learners in which the bias is high, and the predictive power is just a tad better than random guessing. 
  • Each of these weak learners contributes some vital information for prediction, enabling the boosting technique to produce a strong learner by effectively combining these weak learners.
  • The final strong learner brings down both the bias and the variance.
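
To see boosting in action, here is a minimal sketch using XGBoost's scikit-learn wrapper (the dataset and hyperparameters are illustrative choices, not from the original post):

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Trees are built sequentially: each new tree is fit to the residual errors
# of the ensemble so far, scaled by the learning rate.
model = XGBClassifier(n_estimators=200, learning_rate=0.1, max_depth=3)
model.fit(X_train, y_train)
print("boosting accuracy:", model.score(X_test, y_test))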






Thursday, August 27, 2020

What is a Confusion Matrix?

A confusion matrix is a technique for summarizing the performance of an algorithm, typically a supervised learning one. Most of the time we use a confusion matrix to visualize the performance of a classification algorithm.

Each row of the matrix represents the instances in a predicted class while each column represents the instances in an actual class (or vice versa).

It is a special kind of contingency table, with two dimensions ("actual" and "predicted") and identical sets of "classes" in both dimensions (each combination of dimension and class is a variable in the contingency table).

For a binary classifier, the matrix has the following layout:

                     Actual Positive        Actual Negative
Predicted Positive   True Positive (TP)     False Positive (FP)
Predicted Negative   False Negative (FN)    True Negative (TN)

Four Basic Evaluation Metrics of the Confusion Matrix:

Accuracy:

For what fraction of all instances is the classifier's prediction correct (for either the positive or negative class)?

Accuracy = (TP + TN) / (TP + TN + FP + FN)


Classification Error: 

For what fraction of all instances is the classification incorrect?

Classification Error = (FP + FN) / (TP + TN + FP + FN) = 1 - Accuracy

Recall (True positive Rate): 

What fraction of all positive instances does the classifier correctly identify as positive?

Recall = TP / (TP + FN)

Recall is also known as 

  • True Positive Rate
  • Sensitivity
  • Probability of Detection

Precision:

What fraction of positive predictions are correct?

Precision = TP / (TP + FP)


Specificity is the True Negative Rate, TN / (TN + FP); the False Positive Rate, FP / (FP + TN), equals 1 - Specificity.

Calculation of Evaluation Metrics using scikit-learn:

Import accuracy_score, precision_score, recall_score, and f1_score from the sklearn.metrics module, as shown below.
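
A minimal sketch (the labels below are made up for illustration, since the original screenshot is not available):

from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

# Illustrative labels: 1 = positive class, 0 = negative class
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

# Note: scikit-learn's convention is rows = actual, columns = predicted
print(confusion_matrix(y_true, y_pred))
print("accuracy: ", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("f1:       ", f1_score(y_true, y_pred))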



Monday, August 24, 2020

ANN (Artificial Neural Network):

An ANN is basically an engineering approximation of the biological neuron. It takes many inputs but produces one output.


A hidden layer in an artificial neural network is a layer between the input layer and the output layer, where artificial neurons take in a set of weighted inputs and produce an output through an activation function. The activation function is also known as a transfer function; it may be, for example, a sigmoid or hyperbolic tangent (tanh) function.
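
To make this concrete, here is a tiny Python sketch (not from the original post) of a single artificial neuron: many weighted inputs, one output, produced by passing the weighted sum through a sigmoid activation function:

import math

def sigmoid(z):
    # Squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, passed through the activation (transfer) function
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# Many inputs, one output
print(neuron([0.5, 0.8, 0.2], [0.4, -0.6, 0.9], bias=0.1))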


An ANN is inspired by the neural network of the human body, the best example of parallel processing. It emulates the structure of the human brain: it is composed of a large number of highly interconnected simple processing elements (neurons) working in parallel to learn or solve a specific problem.

What is a Neural Network?

A neural network is just a web of interconnected neurons. With the help of these interconnected neurons, the human body performs all of its parallel processing.

A neuron sends signals through its axon; the axon distributes these signals through synapses.

Wednesday, August 19, 2020

What is FBDI?

FBDI is one of the many ways to perform data conversions in Oracle Fusion Cloud. FBDI stands for File-Based Data Import. In a cloud environment, FBDI is the best way to get mass conversions done in the shortest time. Not all entities are currently provided with FBDI in Oracle Cloud, so during the planning phase itself, time and budget are calculated based on the availability of FBDI, ADFdi, or web services. This is important during the plan and design phases because write access to the database is restricted for SaaS (Software as a Service) users.

Pre-Requisites for FBDI:

1. The machine should have Excel or equivalent software that can execute Excel macros
2. Users should make sure macros are enabled in Excel
3. An FBDI template for the component should exist on docs.oracle.com
4. It is better to download the FBDI template whose version matches the version of the cloud environment

Stages:

1. Downloading the template
2. Preparing data in the template
3. Generating the .zip file
4. Uploading the file to Oracle
5. Moving data to the interface tables
6. Moving data to the base tables

Oracle link:

https://docs.oracle.com/en/cloud/saas/supply-chain-management/20c/oefsc/overview.html#SCM_File_Based_Data_Import_Overview



Credit: Oracle


Here I will explain the step-by-step procedure for importing work definitions in Oracle Cloud Manufacturing. The same procedure can be applied to other modules, such as Finance, Projects, etc.

1st Step: Download the Template

Oracle URL to download the import file: see the docs.oracle.com link above.

2nd Step: Preparation of Data


Open the template and fill in values for all mandatory parameters present in the Excel sheet.

3rd Step: Generate the .zip File


1. Once all the required data is filled in, click the “Generate CSV File” button on the first worksheet of the template.

2. The macro runs and generates a series of CSV files zipped into one file. Give the file an appropriate name for easy reference.
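
If you ever need to rebuild the .zip outside Excel (for example, after hand-editing a CSV), a minimal Python sketch like the following does the same packaging; the file names here are purely hypothetical:

import zipfile

# Hypothetical CSV files produced from the FBDI template worksheets
csv_files = ["WorkDefinitionHeaders.csv", "WorkDefinitionOperations.csv"]

# Pack the CSVs into one compressed .zip, as the template macro does
with zipfile.ZipFile("WorkDefinitionImport.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for name in csv_files:
        zf.write(name)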




4th Step: Upload the .zip File to the Oracle Cloud Application


1. Log in to the Oracle Cloud application using your username and password.

2. From the Navigator, click File Import and Export under Tools.

3. Click on Actions -> Upload in the Search Results area.

4. Click on Upload and upload the .zip file from your local system.

5. Select the Action. It differs from module to module and also by import file type. For the work definition import, I selected the action scm/WorkDefinition/Import.

6. Click on Save and Close.

5th Step: Load Data into the Interface Tables


1. Click on Scheduled Processes.

2. Click on New Scheduled Process and search for Load Interface File for Import.


Load Interface File for Import: this process transfers data from the user-defined file into the interface tables.

3. Click on 'OK'. You will be able to see the 'Process Details' window.

4. Search for the import process and select it. To import work definitions, the import process is 'Import Work Definitions'. Also select the uploaded data file.

5. Click on Submit. You can optionally add submit notes.

6. Click on Refresh (highlighted in yellow) to see the status of the scheduled process. The process status will move from Ready to Wait.

7. If the data transfers successfully to the interface table, the process will show a "Succeeded" status.

8. To check the log file, just click on the corresponding status. You will be able to download the log file corresponding to the process ID.

6th Step: Load Data from the Interface Tables into the Application Tables


Verify that the Load Interface File for Import process completed successfully, then submit the product-specific import process.


To submit the product-specific import, perform the steps below.

1. Go to Scheduled Processes and click on "New Scheduled Process". Search for the product-specific process; for us it is Import Work Definitions.

2. Click on "OK". You will able to see below Process details window.
































3. For the work definition import, select the batch size and click on Submit. Submit notes are optional.

4. Click on Refresh and check the job status.

7th Step: Verify in the Application 


Verify that the work definition was created in the application by searching for it manually.

1. Go to the Supply Chain Execution tile, then click on Work Definition.

2. Click on Tasks, then under Work Definition click on Manage Work Definitions.

3. After clicking on Manage Work Definitions, the Manage Work Definitions page appears. Search manually for the created work definition.

Note: This procedure is for the work definition import.




