Latest News

Sunday, June 13, 2021

The process of transferring one image's aesthetic style to another is known as neural style transfer. The algorithm takes three images: an input image, a content image, and a style image, and modifies the input to match the content of the content image and the artistic style of the style image.

The Basic Principle behind Neural Style Transfer

The basic idea behind neural style transfer is to define two distance functions: one that describes how different the content of two images is, Lcontent, and another that characterizes the difference in style between two images, Lstyle. Then, given three images (the desired style image, the desired content image, and the input image, initialized with the content image), we alter the input image so that both its content distance from the content image and its style distance from the style image are as small as possible.

Content Image

Style Image


Importing Packages and Selecting a Device

Below is a list of the packages needed to implement neural style transfer; a short sketch of the imports and device selection follows the list.

  • torch, torch.nn, numpy (indispensable packages for neural networks with PyTorch)
  • torch.optim (efficient gradient descent)
  • PIL, PIL.Image, matplotlib.pyplot (to load and display images)
  • torchvision.transforms (to transform PIL images into tensors)
  • torchvision.models (to train or load pre-trained models)
  • copy (to deep copy the models; a system package)
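
A minimal sketch of these imports and the device selection, assuming a standard PyTorch setup (the full notebook linked below may differ slightly):

import copy

import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.transforms as transforms
import torchvision.models as models

# Run on the GPU when one is available; style transfer is slow on a CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)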

General steps to perform style transfer:

  1. Visualize the data
  2. Perform basic preprocessing and prepare the data
  3. Set up the loss functions (a sketch follows after this list)
  4. Create the model
  5. Optimize for the loss function
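
For step 3, here is a minimal sketch of the two loss modules, following the standard PyTorch style-transfer formulation (the complete version is in the Kaggle notebook linked below):

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentLoss(nn.Module):
    """Measures how far the input's feature maps are from the content image's."""
    def __init__(self, target):
        super().__init__()
        # Detach the target so it is treated as a fixed value, not a variable
        self.target = target.detach()

    def forward(self, input):
        self.loss = F.mse_loss(input, self.target)
        return input  # pass the input through unchanged

def gram_matrix(input):
    """Gram matrix of the feature maps, used to compare styles."""
    b, c, h, w = input.size()           # batch, channels, height, width
    features = input.view(b * c, h * w)
    G = torch.mm(features, features.t())
    return G.div(b * c * h * w)         # normalize by the number of elements

class StyleLoss(nn.Module):
    """Measures how far the input's Gram matrix is from the style image's."""
    def __init__(self, target_feature):
        super().__init__()
        self.target = gram_matrix(target_feature).detach()

    def forward(self, input):
        self.loss = F.mse_loss(gram_matrix(input), self.target)
        return input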

Code:

You can find the complete code for this article at the URL below.

https://www.kaggle.com/smitasahoo/neural-style-transfer-using-pytorch





Wednesday, May 26, 2021

 What is List Comprehension?

List comprehensions are a quick and easy way to create lists, and they are one of Python's most useful features. They are used to create new lists from other iterables, or to build a subsequence of the elements that satisfy a certain condition.

Example:

We want a list containing all the characters of a string. Using a list comprehension, we can write it as below:
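
(The original post shows this code as an image; a minimal equivalent looks like this.)

word = "python"

# List comprehension: one expression builds the whole list
chars = [ch for ch in word]
print(chars)  # ['p', 'y', 't', 'h', 'o', 'n']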

We can solve the same problem in the traditional way, as below:
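
(Again, the original code is shown as an image; the traditional version is roughly the following.)

word = "python"

# Traditional approach: start with an empty list and append in a loop
chars = []
for ch in word:
    chars.append(ch)
print(chars)  # ['p', 'y', 't', 'h', 'o', 'n']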

Hacker Rank Problem:

Hacker Rank Solution:






What is a Task Orchestration Tool?

Cleaning data, training machine learning models, monitoring performance, and deploying the models to a production server are common tasks for smaller teams to begin with. The number of repetitive steps increases as the team and solution expand in size. It becomes much more important that these activities are completed in a timely manner.

The degree to which these activities are interdependent grows as well. You will have a pipeline of activities that need to be run once a week or once a month when you first start out. These tasks must be completed in the correct order. This pipeline evolves into a network of dynamic branches as you expand. In several cases, some tasks trigger the execution of others, which may be dependent on the completion of some other tasks first.

This network can be represented as a DAG (Directed Acyclic Graph), which represents each task and its interdependencies.
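
As an illustration only, here is a minimal sketch of such a pipeline declared as a DAG in Apache Airflow (one of the tools compared below), using Airflow 2.x syntax; the task names and schedule are made up:

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="weekly_ml_pipeline",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@weekly",
    catchup=False,
) as dag:
    clean_data = BashOperator(task_id="clean_data", bash_command="echo cleaning data")
    train_model = BashOperator(task_id="train_model", bash_command="echo training model")
    deploy_model = BashOperator(task_id="deploy_model", bash_command="echo deploying model")

    # The dependencies below form a directed acyclic graph:
    # clean_data -> train_model -> deploy_model
    clean_data >> train_model >> deploy_model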

Pipeline (Credit: Google Images)



DAG (Credit: Google Images)



There has been a recent proliferation of new tools for orchestrating task and data workflows (often grouped under "MLOps"). Since the sheer number of these tools makes it difficult to determine which to use and how they interact, we decided to pit some of the most common against one another.

Source: Google Images

It is clear that the most common solution is Airflow, followed by Luigi. There are also newer candidates, all of which are growing rapidly.

Comparison Table

Tool             Maturity   Popularity   Simplicity   Breadth   Language
Apache Airflow   B          A            C            A         Python
Luigi            B          A            A            B         Python
Argo             C          B            B            B         YAML
Kubeflow         C          B            B            C         Python
MLFlow           C          B            A            C         Python

While each of these tools has its own strengths and weaknesses, none of them guarantees a pain-free process right out of the box. Before you start worrying about which tool to use, make sure you have strong processes in place, such as a positive team culture, blame-free retrospectives, and long-term goals.

Friday, May 21, 2021


MLOps is a DevOps extension in which the DevOps principles are applied to machine learning pipelines. Creating a machine learning pipeline differs from creating software, primarily due to the data aspect. The model's quality is determined by more than just the code's quality.

It is also determined by the quality of the data — i.e. the features — used to run the model. According to Airbnb, data scientists spend roughly 60% to 80% of their time creating, training, and testing data. Feature stores allow data scientists to reuse features rather than rebuilding them for each new model, saving valuable time and effort. Feature stores automate this process and can be triggered by Git-pushed code changes or the arrival of new data. This automated feature engineering is a crucial component of the MLOps concept.

ML Ops is the intersection of Machine Learning, DevOps and Data Engineering



Photo by Kevin Ku from Pexels

The process of creating features is known as feature engineering, and it is a complex but essential component of any machine learning process. Better features equal better models, which equals a better business outcome.

Generating a new feature requires substantial work, and building the feature pipeline is only part of it. You probably went through a long trial-and-error process, with a large number of candidate features, before being satisfied with your new feature. Next, operational pipelines are needed to calculate and store it, and these differ depending on whether the feature is online or offline.

In addition, every data science project begins with a search for the right features. The problem is that there is usually no single, centralized place to search; features are scattered everywhere.

A feature store is not only a data layer; it also allows users to transform raw data and store it as features that are ready for use in any type of machine learning model.



There are two types of features: online and offline.

Offline Features: Many of the features are calculated offline as part of a batch job. As an example, consider the average monthly spend of a customer. They are mostly used by offline processes. Because these types of computations can take a long time, they are calculated using frameworks such as Spark or by simply running complex SQL queries against a set of databases and then using a batch inference process.

Data preparation pipelines push data into the Feature Store tables and training data repositories.
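
For example, an offline feature such as a customer's average monthly spend could be computed with a batch Spark job along these lines (a sketch only; the input path, column names, and output location are assumptions, not part of the original article):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("offline_features").getOrCreate()

# Assumed source table of raw transactions: customer_id, transaction_date, amount
transactions = spark.read.parquet("/data/transactions/")

avg_monthly_spend = (
    transactions
    .withColumn("month", F.date_trunc("month", F.col("transaction_date")))
    .groupBy("customer_id", "month")
    .agg(F.sum("amount").alias("monthly_spend"))
    .groupBy("customer_id")
    .agg(F.avg("monthly_spend").alias("avg_monthly_spend"))
)

# Persist the computed feature so feature store tables and training jobs can read it
avg_monthly_spend.write.mode("overwrite").parquet("/data/features/avg_monthly_spend/")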


Online Features: These features are a little more complicated because they must be calculated quickly and are frequently served in milliseconds. Calculating a z-score, for example, for real-time fraud detection. In this case, the pipeline is built in real time by calculating the mean and standard deviation over a sliding window. These calculations are much more difficult, necessitating quick computation as well as quick access to the data. The information can be kept in memory or in a very fast key-value database. The process itself can be carried out on various cloud services or on a platform such as the Iguazio Data Science Platform, which includes all of these components as part of its core offering.

Model training jobs use Feature Store and training data repository data sets to train models and then push them to the model repository.
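
As a toy illustration of the sliding-window z-score mentioned above (plain Python, independent of any particular feature store or streaming engine):

from collections import deque
import statistics

class SlidingZScore:
    """Online feature: z-score of the latest value within a sliding window."""

    def __init__(self, window_size=100):
        self.window = deque(maxlen=window_size)

    def update(self, value):
        self.window.append(value)
        if len(self.window) < 2:
            return 0.0
        mean = statistics.mean(self.window)
        std = statistics.stdev(self.window)
        return 0.0 if std == 0 else (value - mean) / std

# Example: score incoming transaction amounts for real-time fraud detection
scorer = SlidingZScore(window_size=50)
for amount in [12.0, 15.5, 14.2, 13.8, 950.0]:
    print(round(scorer.update(amount), 2))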


Advantages of Feature Store:

  • Faster development
  • Smooth model deployment in production
  • Increased model accuracy
  • Better collaboration
  • Track lineage and address regulatory compliance


Friday, January 29, 2021

Oracle Cloud Platform Enterprise Analytics 2020 Specialist: Exam Number: 1Z0-1041-20

Who Should Take This Oracle Certification:

An Oracle Cloud Platform Enterprise Analytics 2020 Certified Specialist is responsible for implementing Oracle Analytics Cloud. They have the knowledge required to perform provisioning, build dimensional models, and create data visualizations. They can use Advanced Analytics capabilities and create machine learning models.

 

  • Format: Multiple Choice
  • Duration: 85 Minutes
  • Exam Price: $245 (INR ₹18,538)
  • Number of Questions: 55
  • Passing Score: 70%

Recommended Trainings:

Oracle Cloud Platform Enterprise Analytics 2020 Certified Specialist 
Oracle Data Management Cloud Services Learning Subscription

Additional Preparation and Information

A combination of Oracle training and hands-on experience (attained via labs and/or field experience) provides the best preparation for passing the exam.


Exam topics


Oracle Analytics Cloud (OAC)

  • Describe the editions of Oracle Analytics Cloud and solutions provided
  • Create an OAC instance

Oracle Analytics Cloud Provisioning and Lifecycle

  • Provision Users and Application Roles
  • Explain how to migrate from OBIEE on prem to Oracle Analytics Cloud

Modelling

  • Explain Transactional Systems, Analytical Systems, Data Warehousing, Dimensions, Facts, and Hierarchies
  • Build Types of Dimensional Modeling

Data Visualization

  • Explain OAC 'best visualization' for a data set
  • Describe brushing and its benefits
  • Create a flexible layout of multiple visualizations to present data as a story
  • Use OAC to present your data as a story
  • Create a custom visualization plugin
  • Use Search
  • Describe New visualization Types
  • Use Map Layers
  • Replace a data set and search data sets using BI Ask

Data Preparation

  • Describe self service data preparation
  • Perform operations on a data set
  • Describe 'sequence' in the context of OAC data preparation
  • Explain how OAC Data Sync works and when it is implemented
  • Explain the OAC Data Gateway (formerly the remote data connector)
  • Use Data Flows to curate a Data Set
  • Create a connection to ADW or ATP

Advanced Analytics

  • Describe the Advanced Analytics capabilities in OAC
  • Explain Advanced Calculations inside the Expression Editor
  • Create trendlines and forecasts
  • Use Clustering
  • Use Outlier Identification

Machine Learning

  • Use the 'Explain' functionality
  • Create and train a machine learning model, analyze its effectiveness and use it in a project
  • Use an ML scenario in a project

Oracle Analytics Cloud Answers, Dashboards and BI Publisher

  • Use Oracle BI Analysis (Answers) to build a report
  • Design pixel-perfect reports
  • Manage content in the Catalog
  • Create Prompts
  • Create Dashboards
  • Create Calculations
  • Administration
  • OAC Mobile

Wednesday, September 23, 2020

What is an ESS Job?

ESS jobs in Fusion Apps are the equivalent of Concurrent Programs in Oracle Apps R12. Just as we use Concurrent Programs to run reports, procedures, and scripts in Oracle Apps, we use ESS jobs in Fusion Apps to run BIP reports, procedures, and scripts. We register reports and scripts in Fusion Apps as ESS jobs, and users run these ESS jobs as Scheduled Processes in Oracle Fusion, much like Concurrent Requests in Oracle Apps R12.

In Oracle Apps we create parameters on a Concurrent Program and attach a value set to them so that users get a list of values to choose from at runtime. ESS jobs in Fusion Apps also support parameters, but we do not attach a value set to an ESS job parameter; instead, we attach a List of Values (LOV). In Oracle Fusion, List of Values and value sets are two different objects. We also cannot create a custom table-type List of Values for ESS job parameters; Oracle provides standard View Objects (VOs), and we build the List of Values on top of a View Object and attach it to the parameter.


Steps To Create ESS Job In Oracle Fusion

1. Navigate to Setup and Maintenance.


2. Click on Search and search for "Manage Enterprise %".



 


3. Click on Manage Enterprise Job Definitions and Job Sets for Financial, Supply Chain and Related Applications.


4. Click on "+" to create a new job; this navigates to the ESS Job page.


ESS JOB Page

5. Enter the Display Name, Name, Path, and Job Application Name. Select the Job Type as BIPJobType and provide the Report ID. To get the Report ID, go to BI Publisher and place the report under the /Shared Folders/ folder (it can be in any folder or its subfolders), then copy the part of the path after /Shared Folders/ up to .xdo.
Example:
Report ID: /Custom/Financials/Payables/Test Report.xdo




6. Under the Parameters section, click the Create button and enter the parameters.


Note: If the BIP report has more than one parameter, the parameter order in the ESS job must match the parameter order in the BIP report.

Example : 


Order of Parameters in ESS job


7. Save and close the popup. Save and close the ESS Job page as well.
8. Now you can run the job. Navigate to Scheduled Processes.


9. Click on Scheduled Processes and then New Scheduled Process.
10. Select the job you just created, enter the input parameters, and submit the job.

Provide Parameters


Submit the job.


Click Refresh to see the job status. When the status is Succeeded, click on it to see the output file, log file, and XML file.



