Latest News

Wednesday, August 30, 2023


Question: 

Lanternfish

You are in the presence of a specific species of lanternfish. They have one special attribute: each lanternfish creates a new lanternfish once every 7 days.

However, this process isn’t necessarily synchronized between every lanternfish - one lanternfish might have 2 days left until it creates another lanternfish, while another might have 4. So, you can model each fish as a single number that represents the number of days until it creates a new lanternfish.

Furthermore, you reason, a new lanternfish would surely need slightly longer before it’s capable of producing more lanternfish: two more days for its first cycle.

So, suppose you have a lanternfish with an internal timer value of 3:

After one day, its internal timer would become 2.

After another day, its internal timer would become 1.

After another day, its internal timer would become 0.

After another day, its internal timer would reset to 6, and it would create a new lanternfish with an internal timer of 8.

After another day, the first lanternfish would have an internal timer of 5, and the second lanternfish would have an internal timer of 7.

A lanternfish that creates a new fish resets its timer to 6, not 7 (because 0 is included as a valid timer value). The new lanternfish starts with an internal timer of 8 and does not start counting down until the next day.

For example, suppose you were given the following list:

3,4,3,1,2

This list means that the first fish has an internal timer of 3, the second fish has an internal timer of 4, and so on until the fifth fish, which has an internal timer of 2. Simulating these fish over several days would proceed as follows:

Initial state: 3,4,3,1,2

After 1 day: 2,3,2,0,1

After 2 days: 1,2,1,6,0,8

After 3 days: 0,1,0,5,6,7,8

After 4 days: 6,0,6,4,5,6,7,8,8

After 5 days: 5,6,5,3,4,5,6,7,7,8

After 6 days: 4,5,4,2,3,4,5,6,6,7

After 7 days: 3,4,3,1,2,3,4,5,5,6

After 8 days: 2,3,2,0,1,2,3,4,4,5

After 9 days: 1,2,1,6,0,1,2,3,3,4,8

After 10 days: 0,1,0,5,6,0,1,2,2,3,7,8

After 11 days: 6,0,6,4,5,6,0,1,1,2,6,7,8,8,8

After 12 days: 5,6,5,3,4,5,6,0,0,1,5,6,7,7,7,8,8

After 13 days: 4,5,4,2,3,4,5,6,6,0,4,5,6,6,6,7,7,8,8

After 14 days: 3,4,3,1,2,3,4,5,5,6,3,4,5,5,5,6,6,7,7,8

After 15 days: 2,3,2,0,1,2,3,4,4,5,2,3,4,4,4,5,5,6,6,7

After 16 days: 1,2,1,6,0,1,2,3,3,4,1,2,3,3,3,4,4,5,5,6,8

After 17 days: 0,1,0,5,6,0,1,2,2,3,0,1,2,2,2,3,3,4,4,5,7,8

After 18 days: 6,0,6,4,5,6,0,1,1,2,6,0,1,1,1,2,2,3,3,4,6,7,8,8,8,8

Each day, a 0 becomes a 6 and adds a new 8 to the end of the list, while each other number decreases by 1 if it was present at the start of the day.

In this example, after 18 days, there are a total of 26 fish.

Question 1 (easy): How many lanternfish would there be after 80 days?

Question 2 (harder): How many lanternfish would there be after 400 days?
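
For Question 1 a naive per-fish simulation works, but the population grows exponentially, so by day 400 a literal list of fish is infeasible. Counting how many fish share each timer value (0 through 8) keeps the state at just nine integers. A minimal sketch of that approach (the function name and the reuse of the example input are mine):

from collections import Counter

def count_lanternfish(initial_timers, days):
    # counts[t] = number of fish whose internal timer is t (0..8)
    counts = [0] * 9
    for timer, n in Counter(initial_timers).items():
        counts[timer] = n
    for _ in range(days):
        spawning = counts[0]
        counts = counts[1:] + [spawning]  # timers shift down; new fish start at 8
        counts[6] += spawning             # parents reset to 6
    return sum(counts)

fish = [3, 4, 3, 1, 2]
print(count_lanternfish(fish, 18))   # 26, matching the walkthrough above
print(count_lanternfish(fish, 80))   # Question 1
print(count_lanternfish(fish, 400))  # Question 2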

Monday, May 30, 2022

 map(): map(function, iterable)

The map function takes a function and an iterable as arguments and applies the function to each element of the iterable separately.

The returned map object can be passed to functions like list() or set() to extract the values from it.
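
For example:

squares = map(lambda x: x * x, [1, 2, 3, 4])  # map object (lazy)
print(list(squares))                          # [1, 4, 9, 16]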


filter(): The filter function operates on an iterable and returns the subset of elements for which the filtering rule returns True.
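
For example, keeping only the even numbers:

evens = filter(lambda x: x % 2 == 0, [1, 2, 3, 4, 5, 6])
print(list(evens))   # [2, 4, 6]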

reduce(): The reduce function transforms a given list into a single value by applying a given function cumulatively to all the elements.


You have to import reduce() from functools; otherwise a NameError will be thrown saying name 'reduce' is not defined.
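
For example, folding a list down to its sum:

from functools import reduce   # NameError without this import

total = reduce(lambda acc, x: acc + x, [1, 2, 3, 4, 5])
print(total)   # 15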





Saturday, December 18, 2021

 Power Transform: 

Power transformation is used to map data from any distribution to one close to a Gaussian distribution, since approximate normality of the features is desirable in many modeling scenarios. Transformation of the data is also used to stabilize variance and minimize skewness.

For instance, some algorithms perform better or converge faster when features are close to normally distributed:

  • linear and logistic regression
  • nearest neighbors
  • neural networks
  • support vector machines with radial basis kernel functions
  • principal component analysis
  • linear discriminant analysis

The PowerTransformer class can be accessed from the sklearn.preprocessing package. PowerTransformer provides two transformations: the Yeo-Johnson transform and the Box-Cox transform.

The Yeo-Johnson transform is:

$$x_i^{(\lambda)} = \begin{cases} \dfrac{(x_i + 1)^\lambda - 1}{\lambda} & \text{if } \lambda \neq 0,\ x_i \geq 0 \\ \ln(x_i + 1) & \text{if } \lambda = 0,\ x_i \geq 0 \\ -\dfrac{(-x_i + 1)^{2-\lambda} - 1}{2 - \lambda} & \text{if } \lambda \neq 2,\ x_i < 0 \\ -\ln(-x_i + 1) & \text{if } \lambda = 2,\ x_i < 0 \end{cases}$$

The Box-Cox transform is:

$$x_i^{(\lambda)} = \begin{cases} \dfrac{x_i^\lambda - 1}{\lambda} & \text{if } \lambda \neq 0 \\ \ln(x_i) & \text{if } \lambda = 0 \end{cases}$$
Important Points:

Box-Cox can be applied to strictly positive data only.

Also, both transformations are parameterized by λ, which is determined through maximum likelihood estimation.
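
A minimal sketch of the class in use (the synthetic lognormal data is just an illustration):

import numpy as np
from sklearn.preprocessing import PowerTransformer

rng = np.random.RandomState(0)
X = rng.lognormal(size=(1000, 1))   # strictly positive, right-skewed data

# method='yeo-johnson' is the default and also handles negative values;
# Box-Cox requires strictly positive input
pt = PowerTransformer(method='box-cox')
X_gaussian = pt.fit_transform(X)

print(pt.lambdas_)   # the lambda found by maximum likelihood estimation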


References:

https://scikit-learn.org/stable/modules/preprocessing.html#preprocessing-scaler

Sunday, June 13, 2021

The process of transferring one image's aesthetic style to another is known as neural style transfer. The algorithm takes three images: an input image, a content image, and a style image, and modifies the input to match the content of the content image and the artistic style of the style image.

The Basic Principle behind Neural Style Transfer

The basic idea behind neural style transfer is to establish two distance functions: one to describe how different the content of two images is, Lcontent, and another to characterize the difference in style between the two images, Lstyle. Then, given three images: the desired style image, the desired content image, and the input image (initialized with the content image), we strive to alter the input image so that its content distance from the content image and its style distance from the style image are as small as possible.

Content Image

Style Image


Importing Packages and Selecting a Device

Below is a list of the packages needed to implement neural style transfer:

  • torch, torch.nn, numpy (indispensable packages for neural networks with PyTorch)
  • torch.optim (efficient gradient descents)
  • PIL, PIL.Image, matplotlib.pyplot (load and display images)
  • torchvision.transforms (transform PIL images into tensors)
  • torchvision.models (train or load pre-trained models)
  • copy (to deep copy the models; system package)

General steps to perform style transfer:

  1. Visualize data
  2. Basic Preprocessing/preparing our data
  3. Set up loss functions
  4. Create model
  5. Optimize for loss function

Code:

You can find the complete code for this article in the given URL below.

https://www.kaggle.com/smitasahoo/neural-style-transfer-using-pytorch
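
The notebook contains the full implementation. For orientation, here is a condensed, illustrative sketch of steps 3–5 (loss setup, model, and optimization); the layer indices, loss weights, and helper names are typical choices and assumptions of mine, not necessarily what the notebook uses:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

def gram_matrix(features):
    # style is captured by correlations between feature maps (Gram matrix)
    b, c, h, w = features.size()
    flat = features.view(b * c, h * w)
    return flat @ flat.t() / (b * c * h * w)

def style_transfer(content_img, style_img, num_steps=20,
                   style_weight=1e6, content_weight=1):
    # content_img / style_img: preprocessed tensors of shape (1, 3, H, W)
    # in [0, 1]; newer torchvision versions use weights= instead of pretrained=
    vgg = models.vgg19(pretrained=True).features.eval()
    for p in vgg.parameters():
        p.requires_grad_(False)
    # in-place ReLUs would overwrite the conv activations saved below
    for i, layer in enumerate(vgg):
        if isinstance(layer, nn.ReLU):
            vgg[i] = nn.ReLU(inplace=False)

    # conv4_2 for content, conv1_1..conv5_1 for style (a common choice)
    content_layer, style_layers = 21, [0, 5, 10, 19, 28]

    def extract(img):
        feats, x = {}, img
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i == content_layer or i in style_layers:
                feats[i] = x
        return feats

    target_content = extract(content_img)[content_layer].detach()
    target_grams = {i: gram_matrix(f).detach()
                    for i, f in extract(style_img).items()
                    if i in style_layers}

    input_img = content_img.detach().clone().requires_grad_(True)
    optimizer = torch.optim.LBFGS([input_img])

    for _ in range(num_steps):
        def closure():
            with torch.no_grad():
                input_img.clamp_(0, 1)   # keep pixel values valid
            optimizer.zero_grad()
            feats = extract(input_img)
            c_loss = F.mse_loss(feats[content_layer], target_content)
            s_loss = sum(F.mse_loss(gram_matrix(feats[i]), target_grams[i])
                         for i in style_layers)
            loss = content_weight * c_loss + style_weight * s_loss
            loss.backward()
            return loss
        optimizer.step(closure)
    return input_img.detach()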

💕




Tuesday, September 22, 2020

XGBoost is an open-source library that provides a high-performance implementation of gradient-boosted decision trees. An underlying C++ code base combined with a Python interface sitting on top makes the package extremely powerful and easy to use. Gradient boosting is a method in which new models are fitted to predict the residuals (i.e. errors) of prior models.


Tianqi Chen, one of the co-creators of XGBoost, announced in 2016 that the innovative system features and algorithmic optimizations in XGBoost had rendered it 10 times faster than the most sought-after machine learning solutions. A truly amazing technique!

Did you know CERN recognized it as the best approach to classify signals from the Large Hadron Collider?




  • XGBoost is an ensemble learning method. 
  • Ensemble learning is a systematic solution to combine the predictive power of multiple learners.
  • The resultant is a single model which gives the aggregated output from several models.
  • The models that form the ensemble, also known as base learners, could be either from the same learning algorithm or different learning algorithms. 
  • Bagging and boosting are two widely used ensemble learners.
  • Though these two techniques can be used with several statistical models, the most predominant usage has been with decision trees.

Bagging:

Bootstrap aggregating, also called bagging, is a machine learning ensemble meta-algorithm designed to improve the stability and accuracy of machine learning algorithms used in statistical classification and regression. 
It reduces variance and helps to avoid overfitting.
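
As an illustration, a bagged ensemble of decision trees in scikit-learn might look like this (the dataset and settings are arbitrary examples, not a recommendation):

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# each tree is trained on a bootstrap sample of the rows;
# predictions are aggregated by majority vote, which reduces variance
bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100,
                        random_state=42)
print(cross_val_score(bag, X, y, cv=5).mean())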

Boosting:

  • In boosting, the trees are built sequentially such that each subsequent tree aims to reduce the errors of the previous tree. 
  • Each tree learns from its predecessors and updates the residual errors.
  • Hence, the tree that grows next in the sequence will learn from an updated version of the residuals.
  • The base learners in boosting are weak learners in which the bias is high, and the predictive power is just a tad better than random guessing. 
  • Each of these weak learners contributes some vital information for prediction, enabling the boosting technique to produce a strong learner by effectively combining these weak learners.
  • The final strong learner brings down both the bias and the variance.
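
And a minimal sketch of boosting with XGBoost's scikit-learn-style API (the hyperparameters here are illustrative, not tuned):

from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# each new tree is fitted to the residual errors of the current ensemble
model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))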






Monday, May 27, 2019

The Python pickle module is used for serializing and de-serializing a Python object structure. Any object in Python can be pickled so that it can be saved on disk.


  • It serializes the object before writing it to a file. 
  • It is a way to convert a Python object (list, dict, etc.) into a byte stream. 
  • It helps to reconstruct the object in another Python script.

import pickle

def storeData():
    # initializing data to be stored in db
    Omkar = {'key': 'Omega', 'name': 'Hello Omega',
             'age': 99, 'pay': 40000}
    Jagdish = {'key': 'Alfa', 'name': 'Hello Alfa',
               'age': 55, 'pay': 50000}

    # database
    db = {}
    db['Omega'] = Omkar
    db['Alfa'] = Jagdish

    # It's important to use binary mode; 'wb' overwrites the file,
    # whereas 'ab' would append a second pickle record on every run
    dbfile = open('examplePickle', 'wb')

    # source, destination
    pickle.dump(db, dbfile)
    dbfile.close()

def loadData():
    # for reading, binary mode is also important
    dbfile = open('examplePickle', 'rb')
    db = pickle.load(dbfile)
    for key in db:
        print(key, '=>', db[key])
    dbfile.close()

if __name__ == '__main__':
    storeData()
    loadData()