Local Ancestry Inference in Admixed Populations - …cs229.stanford.edu/proj2015/290_report.pdf · 2017-09-23


Local Ancestry Inference in Admixed Populations

Naveen Arivazhagan
Department of Computer Science, Stanford University
Stanford, CA 94305, USA
Email: [emailprotected]

Hye Ji Kim
Department of Electrical Engineering, Stanford University
Stanford, CA 94305, USA
Email: [emailprotected]

Edwin Yuan
Department of Applied Physics, Stanford University
Stanford, CA 94305, USA
Email: [emailprotected]

I. INTRODUCTION

Contemporary human sub-populations exhibit great differences in the frequency of various alleles, the set of variations of a particular gene. Advances in genome sequencing have rapidly improved speed, cost, and accuracy, allowing unprecedented opportunity to map the functionality and location of such genetic variation. Of particular interest is the mapping of disease-associated loci in the genome. In admixed populations, populations that are the result of a mixing of two or more ancestral populations over a small number of generations, one technique that has been used extensively is mapping by admixture linkage disequilibrium (MALD). The rationale behind MALD is that disease-affected individuals in the admixed population should share higher levels of ancestry near disease-carrying loci with the ancestral population from which the disease loci arose. The accuracy of MALD thus depends crucially on the accuracy with which one can infer the ancestry around any locus in an admixed genome. This particular task has been termed Local Ancestry Inference (LAI).

Much of the early work in local ancestry inference took form around the assumptions of hidden Markov models (HMMs). While this method is computationally efficient, the Markov assumptions fail to model the correlation in inheritance between base pairs, or linkage disequilibrium (LD). Later models developed at Stanford, such as SABER [1] and HAPAA [2], explicitly model linkage disequilibrium within the HMM framework by extending the dependence of the posterior probabilities to states even further back in the chain; in doing so they are also computationally expensive. A later approach, LAMP [3], utilized probability maximization within a sliding window, assuming that only one recombination event took place within each window. This is based on the fact that, biologically, ancestral haplotypes are inherited in blocks of alleles, and thus between any two blocks there is a single recombination event. LAMP is considered among the gold standards for LAI in recently admixed populations, such as the Chinese and Japanese.

In 2013, the 1000 Genomes Project Phase I released a data set of 1092 individuals from 14 populations that was unprecedented in its detail, genomic completeness, and scope. Soon after, Maples, Gravel, Kenny, and Bustamante released a discriminative model (one that models p(Y|X) as opposed to the joint p(X,Y)) called RFMix [4], which uses conditional random fields (CRFs) based on random forests within windowed sections of the chromosome. RFMix was shown to be both faster and more accurate than the existing LAMP method. The modern challenges for local ancestry inference are: efficiency in light of increasingly large and dense genomic data sets, discrimination between recently divergent ancestries, and overall algorithm accuracy.

Understanding Ancestry Inference. As the previous section illustrates, there is great variety in the nature of the algorithms implemented for local ancestry inference, with different levels of performance depending on the admixing scenario in question. Fundamentally, there are two approaches to the problem of ancestry inference. The first takes an entirely non-biological approach, treating the task as analogous to identifying which ancestry a particular sequence of nucleotide letters is most statistically related to. The second approach is highly motivated by the biology of the genome, attempting to incorporate mechanisms for recombination, mutation, etc. Most models in the field have been of the second type.

To explain in detail how these algorithms work in general, we take RFMix as an example. In the most basic framework, RFMix segments an input strand of DNA from an admixed individual into contiguous windows of single nucleotide polymorphisms (SNPs) and then assigns each of these windows to one of several reference ancestries. This is shown in Figure 1. The statistics for determining a SNP's ancestry come from a training set of reference panels: entire sequences of SNPs that have been globally assigned to one of the ancestries in consideration.

Fig. 1. Illustration of the Ancestry Inference Problem [2]. Two admixed chromosomes are shown with the true ancestries above and the decoded ancestries below. The admixed individuals are mixtures of 3 ancestral populations.

RFMix has a second functionality. The previous approach is fast and works well when one has an abundance of reference panels. This is, however, not typically the case: despite the large organized efforts of HapMap and the 1000 Genomes Project, publicly available population-scale data sets remain sparse. The admixed samples on which RFMix is tested themselves contain ancestry information about the SNPs, albeit in a jumbled form. RFMix is thus able to iteratively infer and then incorporate ancestry information from the admixed (test) samples using expectation-maximization (EM).

Finally, RFMix models phase errors that are produced as part of local ancestry inference and then attempts to autocorrect these errors. In an example simulation the paper provides, this procedure was shown to significantly improve long-range phasing. Measured by the fraction of SNP pairs correctly phased relative to each other, the new phasing generated by RFMix achieved 75% on this metric, compared to 50% achieved by the original Beagle-phased data. Beagle is a standard phasing algorithm that uses haplotype clustering methods.
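The pairwise phasing metric can be made concrete with a short sketch. This is our own illustrative helper, not RFMix's implementation: a site is heterozygous where the two true haplotypes disagree, and a pair of heterozygous sites counts as correctly phased when the estimate puts their alleles together on one haplotype exactly when the truth does.

```python
import numpy as np
from itertools import combinations

def pairwise_phasing_accuracy(true_h1, true_h2, est_h1):
    """Fraction of heterozygous SNP pairs whose relative phase is correct."""
    t1, t2, e1 = (np.asarray(a) for a in (true_h1, true_h2, est_h1))
    het = np.flatnonzero(t1 != t2)          # heterozygous sites
    pairs = list(combinations(het, 2))
    if not pairs:
        return 1.0
    # For each pair, compare "alleles on the same haplotype?" truth vs estimate.
    correct = sum((t1[i] == t1[j]) == (e1[i] == e1[j]) for i, j in pairs)
    return correct / len(pairs)

# A single switch error after the second of three het sites: 1 of 3 pairs survives.
print(pairwise_phasing_accuracy([0, 0, 1], [1, 1, 0], [0, 0, 0]))  # 1/3
```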

II. DATA AND FEATURES

We were able to utilize pre-processed data that the authors of the RFMix paper provided. The data set consists of 51213 SNPs from both copies of chromosome 1 of 362 individuals. The SNPs were assumed to be bi-allelic. The test set consists of 10 admixed Latino individuals, whose genomes were created using a Wright-Fisher simulation to sample 12 generations after admixture. The simulated Latino genomes were generated from existing data sets and have 45% Native American (NAT), 50% European (HapMap CEU), and 5% African (Yoruba in Ibadan, YRI) ancestry. Other simulated samples were used to construct the genomes of the reference panels, of which there were 170 Native American (NAT), 194 African (YRI), and 340 European (CEU). The SNPs used were created to be perfectly phased, so untangling phasing error was not part of the following analysis.

A. Principal Component Analysis

Despite the high dimensionality of the data set, with each training example containing 51213 SNPs, the 3 separate ancestries (Native American, African, and European) could easily be distinguished by a 2-3 component principal component analysis, shown in Figure 2. The admixed individuals (yellow) indeed lie between the 3 ancestral populations in the principal component space, and show much larger variation within the group than any of the ancestral populations. PCA also shows graphically, as expected, that the admixed group as a whole is closer to the Native American and European ancestries than to the African one. This is expected given that the admixed individuals are on average only 5% African.
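The projection step can be sketched as follows. This is a minimal illustration on synthetic stand-in data (random allele frequencies, not the actual 1000 Genomes panels), using scikit-learn's PCA:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic stand-in for the reference panels: three populations, each with
# its own allele-frequency vector over 500 bi-allelic SNPs coded 0/1.
freqs = rng.uniform(0.05, 0.95, size=(3, 500))
X = np.vstack([rng.binomial(1, freqs[k], size=(50, 500)) for k in range(3)])

# Project all 150 individuals onto the first 3 principal axes, as in Figure 2.
pca = PCA(n_components=3)
coords = pca.fit_transform(X.astype(float))
print(coords.shape)  # (150, 3)
```

Individuals from the three synthetic populations separate into three clusters in this 3-dimensional coordinate space, mirroring the separation seen in Figure 2.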

III. METHODS

We use a pipeline approach consisting of two steps. In the first step we identify 'windows': sections of the genome that are believed to be highly correlated with each other and therefore tend to be inherited together. Since they are inherited together, they will have the same population ancestry. Therefore, in our second step we classify the windows that we have identified into one of the 3 source populations.

Fig. 2. PCA. The full training set of 85 Native American (NAT, red), 97 African (YRI, blue), and 170 European (CEU, green) individuals projected onto the first 3 principal axes. The 10 admixed individuals are shown in yellow.

In our paper we use a simple heuristic for identifying the windows. We divide the chromosome into windows of a fixed number of centimorgans. By the definition of a centimorgan, there is thus a variable number of SNPs per window, but the SNPs are grouped according to the average number of chromosomal crossovers expected within the group.
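This windowing heuristic can be sketched in a few lines. The helper below is our own illustration: given each SNP's genetic-map position in centimorgans, it bins SNP indices into fixed-size genetic-distance windows, so the SNP count per window varies while the expected crossover rate per window is constant.

```python
import numpy as np

def cm_windows(genetic_pos_cm, window_cm):
    """Group SNP indices into fixed-size genetic-distance windows.

    genetic_pos_cm: sorted 1-D array of genetic map positions (centimorgans).
    Returns a list of index arrays, one per window; boundaries fall every
    `window_cm` centimorgans, so the number of SNPs per window varies.
    """
    pos = np.asarray(genetic_pos_cm)
    bins = np.floor((pos - pos[0]) / window_cm).astype(int)
    return [np.flatnonzero(bins == b) for b in range(bins[-1] + 1)]

# Example: 6 SNPs at these map positions, grouped into 2 cM windows.
windows = cm_windows([0.0, 0.5, 1.9, 2.1, 3.8, 4.2], window_cm=2.0)
print([w.tolist() for w in windows])  # [[0, 1, 2], [3, 4], [5]]
```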

Having identified the windows, for the second step we use a variety of classifiers to assign each window of an admixed genome to the correct population ancestry, based on its similarity to the corresponding windows from the reference panels.

The full training set consists of 85 Native American (NAT), 97 African (YRI), and 170 European (CEU) individuals. A more moderate and realistic training set consists of 30 individuals of each ancestry. Finally, the extreme case in which one has a scarcity of well-sequenced reference panels is represented by 3 training examples of each ancestry. In reality, the possibility of having such large cohorts of accurately sequenced data is unlikely given modern sequencing technologies, and there is increasingly a push to move beyond the heavy reliance on reference panels in order to perform ancestry inference.

A. Manhattan Method

The first classification method we implemented was a simple criterion for determining how closely related two sequences of nucleotides are. We devised a notion of similarity between windows in the reference samples and those in the admixed samples by counting the number of replacements needed to get from one window to the other. For example, if some reference window reads 0 0 1 0 and an admixed window reads 0 1 1 0, the number of replacements needed is just one. The fewer replacements needed to convert between the sequences, the more similar they are. This is the gist of the so-called Manhattan metric. We identify the windows amongst the reference panels to which the admixed window has the highest similarity. We then use a voting scheme where the admixed window is assigned the ancestry of the reference population contributing the largest number of the highest-similarity values.
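The distance-plus-voting scheme above can be sketched as follows. This is a minimal illustration with made-up reference windows and labels; the neighborhood size k is a free parameter we introduce here for concreteness.

```python
import numpy as np
from collections import Counter

def manhattan_label(admixed_window, ref_windows, ref_labels, k=5):
    """Label one admixed window via the Manhattan (mismatch-count) metric.

    ref_windows: (n_refs, window_len) 0/1 array holding the corresponding
    window from each reference haplotype; ref_labels: ancestry per reference.
    The k references with the fewest mismatches vote on the ancestry.
    """
    # Number of replacements needed to turn each reference into the query.
    dists = np.abs(ref_windows - admixed_window).sum(axis=1)
    nearest = np.argsort(dists, kind="stable")[:k]
    votes = Counter(ref_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy reference panel: two CEU and two YRI windows of 4 SNPs each.
refs = np.array([[0, 0, 1, 0],
                 [0, 0, 1, 1],
                 [1, 1, 0, 0],
                 [1, 1, 0, 1]])
labels = ["CEU", "CEU", "YRI", "YRI"]
print(manhattan_label(np.array([0, 1, 1, 0]), refs, labels, k=3))  # CEU
```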

A result of the algorithm's labeled ancestry with window size 2 cM is shown in Figure 3 below, for the entire length of one chromosome of one admixed individual:

Fig. 3. Manhattan Method Labeling. Plots showing the ancestry labels of each of the 51213 SNPs under consideration, for windows of 2 cM, on a single admixed chromosome. The red bars show the true ancestry while the blue overlaid lines show the ancestry predicted by our Manhattan algorithm. a) Compares the ancestry labels when the algorithm is trained on a set of 30 individuals of each ancestry; b) shows the same when trained on a set of 3 individuals of each ancestry. Note that there are no SNPs inherited from YRI simply because the admixed genome under review doesn't have any.

In Figure 3a, we see that, when trained on moderately large data sets of 30 individuals of each ancestry, the Manhattan method is extremely accurate at predicting ancestry. The haploblocks are large, and the Manhattan method finds the correct labels up to small shifts at the block boundaries. It does, however, miss changes in ancestry that occur over just a few SNPs. The overall accuracy of the Manhattan method here is around 96.5%, compared to 97.5% achieved by RFMix.

On the other hand, the algorithm performs much more poorly when trained on a smaller data set of only 3 individuals. Although the overall accuracy of labeling is still relatively high at 81.18%, it is clear from Figure 3b that the Manhattan method does very little to infer the overall shape of the haploblocks. RFMix achieves 87.8% accuracy here, but can iteratively incorporate the admixed predictions via EM to boost performance.

Varying window size. Because inheritance of genes takes place through haploblocks, each of a single ancestry, choosing the correct window size is essential for achieving optimal performance. The effect of varying window size on overall accuracy when using the full training set is shown in Figure 4.

Fig. 4. Manhattan Method Window Size. After training on the full reference panels, the overall accuracy of the Manhattan algorithm is shown as a function of window size.

The results suggest very high performance compared to RFMix, peaking at around 98.4% accuracy for a window size of 1.0 cM. For the window size RFMix uses, 0.2 cM, the accuracy is only 88.4%. As window size is increased past the peak, accuracy falls rapidly. It is important here to keep in mind that the benchmark accuracy achieved by random guessing is already 33.3%, given that we have 3 ancestral populations.

B. Support Vector Machine

As a point of comparison we also applied a support vector machine (SVM) classifier to our data. Again we take the approach of a fixed window size and use the SVM to classify vectors with length equal to the number of SNPs within each window. The results below are trained on the full set of reference panels. We find that the performance of the SVM depends significantly on parameters such as the type of kernel employed, the window size, and the value of an internal parameter C, which is explained below. Figure 5 compares the overall accuracy of the SVM using a linear kernel versus that of one using the radial basis kernel for different window sizes.

It is evident from the figure that the linear kernel outperforms the radial basis kernel (with the default parameters) at all plausible window sizes. To investigate this further, we note that the SVM with the radial basis kernel depends on two parameters, C and γ, and we vary these parameters while fixing the window size at 1.4 cM. The radial basis kernel is K(x, y) = exp(−γ ||x − y||²). γ thus determines how much weight to put on a single training example for a given Euclidean distance between that training example and the test point: the larger γ is, the more weight is placed on training examples that are closer in distance to the test point.

Fig. 5. SVM Accuracy vs. window size. A plot of the overall accuracy achieved as a function of the fixed window size (centimorgans) used, for the radial basis kernel and the linear kernel. Both SVMs are trained on the full set of reference panels.
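The kernel and the effect of γ can be checked directly. A minimal sketch, with two toy windows differing at one SNP:

```python
import numpy as np

def rbf_kernel(x, y, gamma):
    """Radial basis kernel K(x, y) = exp(-gamma * ||x - y||^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.exp(-gamma * np.sum((x - y) ** 2))

x, y = [0, 0, 1, 0], [0, 1, 1, 0]   # windows differing at one SNP
print(rbf_kernel(x, y, gamma=0.3))   # exp(-0.3), about 0.741
print(rbf_kernel(x, y, gamma=0.15))  # exp(-0.15), about 0.861
```

Smaller γ yields a larger kernel value at the same distance, i.e. more distant training examples still contribute appreciable weight.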

We found that as we decreased γ, the accuracy increased: for γ = 0.3 the accuracy is 96%, and for γ = 0.15 the accuracy is 97.2%. We conjecture that this occurs because when γ is smaller, more of the training data is taken into account. Another consideration is that the Euclidean metric may not be the best indicator of how far a given test point is from a training point; the discrete Manhattan distance may characterize the notion of distance between two sequences more faithfully and increase classifier performance.

Another large determinant of the SVM's performance is the value of the internal parameter C. The parameter C controls the trade-off between classification correctness on the training data and the size of the margin. A large value of C indicates a willingness to increase the classifier's training accuracy by giving weight to outlier training examples that are far from the mean of the data; in a general sense, a large C value tolerates overfitting behavior. As expected, as we increase C, the accuracy of the SVM increases, as shown in Figure 6. Here we are using the radial basis kernel while training on the full reference panels.
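A C sweep of this kind can be sketched with scikit-learn's SVC on synthetic windows (the populations, allele frequencies, and γ value below are all made up for illustration; this is not our actual experiment):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic windows from two populations with overlapping allele frequencies,
# so the effect of C on training accuracy is visible.
X = np.vstack([rng.binomial(1, 0.4, size=(60, 40)),
               rng.binomial(1, 0.6, size=(60, 40))]).astype(float)
y = np.repeat([0, 1], 60)

# Larger C penalizes training mistakes more heavily (weaker regularization).
for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="rbf", C=C, gamma=0.05).fit(X, y)
    print(C, clf.score(X, y))
```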

Finally, we evaluated the performance of our SVMs using only small amounts of training data. We first tested with 30 training examples from each ancestry; in that case we get about 96% overall accuracy for both the linear-kernel and radial-basis-kernel SVMs. This is quite a small reduction from the SVM performance on the full set of training data, and indicates that in the regime of large reference panels we gain very little performance by adding more panels. On the other hand, when we evaluate performance in the extreme scenario of 3 training reference panels from each group, we achieve 76% accuracy using the SVM with the radial basis kernel and 82% accuracy using the SVM with the linear kernel. Again, the linear kernel yields superior performance to the radial basis kernel.

Fig. 6. SVM Accuracy vs. parameter C. For an SVM using the radial basis kernel, we plot the overall accuracy as a function of the internal SVM parameter C, whose function is explained in the text. The training set is the full set of reference panels.

C. Random Forest

We use an ensemble of trees to make predictions on the windows. The random forest generates multiple decision trees and takes the majority vote over them to predict the label of the test data. As the number of decision trees increases, the accuracy increases, but as a drawback, the run time also increases. We also observe that increasing the number of trees does not tend to cause over-fitting easily.
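A minimal sketch of this classifier, using scikit-learn's RandomForestClassifier on synthetic windows (the three populations and their allele frequencies are made up; only the trees-vs-runtime trade-off is the point):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Three "ancestries", each defined by an allele-frequency vector over a
# 30-SNP window; 40 reference haplotype windows drawn per ancestry.
freqs = rng.uniform(0.1, 0.9, size=(3, 30))
X = np.vstack([rng.binomial(1, freqs[k], size=(40, 30)) for k in range(3)])
y = np.repeat([0, 1, 2], 40)

# More trees: a steadier averaged vote, at the cost of longer run time.
for n in (1, 10, 100):
    clf = RandomForestClassifier(n_estimators=n, random_state=0).fit(X, y)
    print(n, round(clf.score(X, y), 3))
```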

We then tested the random forest method using only three training examples. We set the number of estimators to √(window length)/N_c and varied N_c from 3 to 0.05. For N_c = 3 we get 69% accuracy, and for N_c = 0.05 we get 80% accuracy.

In Figure 7, we plot the accuracies of the Manhattan method, the SVM method, and the random forest method as a function of window size (using all the training data). We see that for all methods the accuracy peaks at around a window size of 1 cM. For smaller window sizes (including 1 cM), the Manhattan method and the random forest method perform better than the SVM.

D. Hidden Markov Model

We use a hidden Markov model. State i is the ancestry (African, European, etc.) at the i-th position of a haplotype, and the observed variable is the SNP at the i-th position of the haplotype.

Fig. 7. Accuracy of the Manhattan method, the SVM method, and the random forest method vs. window size.

The HMM requires three sets of probabilities: the initial probability of each hidden state, the emission probabilities, and the transition probabilities from one state to another.

For the initial probabilities, we assume every ancestry is equally likely. Second, to estimate the emission probability at the i-th state, we use the empirical allele frequencies in the reference haplotypes; note that this emission probability is not stationary, i.e., it depends on i. Lastly, we assume that with probability 0.9 the ancestry at position i is the same as at position i-1, and that with the remaining probability there is a transition to one of the other ancestries, with equal probability for each.
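Decoding such an HMM can be sketched with the Viterbi algorithm. The sketch below assumes the chain stays in the same ancestry with probability 0.9 (our reading of the transition model above); the 4-position emission table is a toy stand-in for the empirical reference frequencies, not real data.

```python
import numpy as np

def viterbi(obs, start, trans, emit):
    """Most likely ancestry path for one haplotype.

    obs: 0/1 SNP sequence; start: (K,) prior over ancestries;
    trans: (K, K) transition matrix; emit: (L, K) position-dependent
    probability of observing allele 1 under each ancestry.
    """
    L, K = len(obs), len(start)
    logp = np.log(start) + np.log(np.where(obs[0], emit[0], 1 - emit[0]))
    back = np.zeros((L, K), dtype=int)
    for i in range(1, L):
        scores = logp[:, None] + np.log(trans)   # scores[j, k]: j -> k
        back[i] = scores.argmax(axis=0)
        e = np.where(obs[i], emit[i], 1 - emit[i])
        logp = scores.max(axis=0) + np.log(e)
    path = [int(logp.argmax())]
    for i in range(L - 1, 0, -1):                # follow back-pointers
        path.append(int(back[i, path[-1]]))
    return path[::-1]

K = 3
start = np.full(K, 1 / K)                        # uniform ancestry prior
trans = np.full((K, K), 0.1 / (K - 1))           # switch ancestry: prob 0.1 total
np.fill_diagonal(trans, 0.9)                     # stay in same ancestry: 0.9
# Toy position-dependent allele-1 frequencies for the three ancestries.
emit = np.array([[0.9, 0.1, 0.2],
                 [0.9, 0.1, 0.2],
                 [0.1, 0.9, 0.2],
                 [0.1, 0.9, 0.2]])
obs = np.array([1, 1, 1, 1])
print(viterbi(obs, start, trans, emit))  # [0, 0, 1, 1]
```

The decoded path switches ancestry once, mid-haplotype, where the emission probabilities flip: the sticky transitions suppress spurious single-SNP switches.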

Using this approach we get only 50.6% accuracy. This is because the HMM does not pay attention to the index of the SNP and is therefore not able to capture the distribution of the specific columns in the data. We also notice that the predictions are skewed in favor of population 3 because of its high start probability and the low transition probabilities.

IV. ERROR ANALYSIS

We describe two independent sources of error, present not just in our classifiers but also in other, more complex local ancestry inference algorithms:

1) Windowing of the SNPs. In perfect windowing, all SNPs within a window originate not just from the same ancestry but also from the same ancestral individual within a population. If SNPs from two different ancestries fall within the same window, then we will inevitably misclassify one of the two segments within that window. Alternatively, even if a window contains two segments from the same ancestry but from different people, any similarity measure may fail: because such similarity measures only compare a given test window against the corresponding training window of a single individual, the classification will not be ideal and may lead to errors.

TABLE I
The performance of SVM and random forests when evaluated on the true windows

Classifier      train-size=3  train-size=30  full-train
SVM             0.99          0.99           0.99
Random Forest   0.85          0.99           0.99

TABLE II
The performance of SVM and random forests when evaluated on the heuristic-based windows

Classifier      train-size=3  train-size=30  full-train
SVM             0.78          0.96           0.97
Random Forest   0.84          0.97           0.97

2) Assuming now that the windowing is correct, an independent source of error is classification error within a given window. This error exists because, in a real-world scenario, the reference panels used to train the classifier are not direct ancestors of the admixed individuals.

We sought to investigate whether the majority of error in our simulated data came from the first or the second source. To this end, we used the true windows of an admixed individual while training the classifier, instead of the fixed-centimorgan window sizes we had been using, and then again tested with different classifiers and training sizes. Comparing Tables I and II, we find that we can achieve near-perfect performance if we are given the correct admixed windows. This is the case even when the number of reference panels is very small: 3 per ancestry. Thus we find that it is in fact the windowing algorithm that is the main bottleneck in our approach, and further work should be devoted to this step of the process. Several of the more recently published algorithms, such as WinPop, take steps to deliberately optimize the search for the best window length at each locus along the chromosome.

V. CONCLUDING REMARKS

Our conclusion from running various learning algorithms is that a large majority of them work very well (above 95% accuracy) given an abundance of reference-panel training data. This is true even for relatively simple algorithms such as the one based on the Manhattan metric. When the number of reference panels is small, however, EM is a valuable method for iteratively improving performance. Furthermore, our error analysis suggests that a large proportion of the error comes from poor choices of windowing. By windowing more ideally, many algorithms can achieve near-perfect performance even when reference panels are scarce. Thus developing methods for judiciously choosing window size is an important effort in local ancestry inference.

REFERENCES

[1] H. Tang, M. Coram, P. Wang, X. Zhu, and N. Risch, "Reconstructing genetic ancestry blocks in admixed individuals," American Journal of Human Genetics, 2006.


[2] A. Sundquist, E. Fratkin, C. B. Do, and S. Batzoglou, "Effect of genetic divergence in identifying ancestral origin using HAPAA," Genome Research, vol. 18, no. 4, pp. 676–682, 2008. [Online]. Available: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2279255/

[3] S. Sankararaman, S. Sridhar, G. Kimmel, and E. Halperin, "Estimating local ancestry in admixed populations," American Journal of Human Genetics, 2008.

[4] B. K. Maples, S. Gravel, E. E. Kenny, and C. D. Bustamante, "RFMix: A discriminative modeling approach for rapid and robust local-ancestry inference," The American Journal of Human Genetics, vol. 93, no. 2, pp. 278–288, 2013. [Online]. Available: http://dx.doi.org/10.1016/j.ajhg.2013.06.020
