Is it possible to apply cross_val_score() from sklearn to a neupy NN that has an addon like Weight Elimination?











I'm trying to apply cross_val_score() to the following algorithm:



from neupy import algorithms, layers
from sklearn.model_selection import KFold, cross_val_score

# XTrain, XTrainScaled and yTrainScaled are defined earlier
cgnet = algorithms.LevenbergMarquardt(
    connection=[
        layers.Input(XTrain.shape[1]),
        layers.Linear(6),
        layers.Linear(1)],
    mu_update_factor=2,
    mu=0.1,
    shuffle_data=True,
    verbose=True,
    decay_rate=0.1,
    addons=[algorithms.WeightElimination])

kfold = KFold(n_splits=5, shuffle=True, random_state=7)
scores = cross_val_score(cgnet, XTrainScaled, yTrainScaled,
                         scoring='neg_mean_absolute_error', cv=kfold, verbose=10)
print(scores)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))


And this is the error message I get:



TypeError: Cannot create a consistent method resolution
order (MRO) for bases LevenbergMarquardtWeightElimination, WeightElimination


Without WeightElimination or any other addon, cross_val_score() works fine... Is there another way to do this? Thank you.










scikit-learn cross-validation levenberg-marquardt neupy






asked Nov 20 at 13:27 by Manuel Almeida












  • What neupy and python versions do you use?
    – itdxer
    Nov 21 at 14:06












  • neupy 0.6.5 and numpy 1.15.1
    – Manuel Almeida
    Nov 21 at 16:07
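For reference, one way to check these versions directly from Python (assuming both packages expose a __version__ attribute, as recent releases do) is:

import sys
import neupy
import numpy

# Print the interpreter and package versions, e.g. for a bug report
print(sys.version)
print("neupy", neupy.__version__)
print("numpy", numpy.__version__)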


















1 Answer









It looks like the cross_val_score function won't work with neupy, but you can run the same code in a slightly different way.



import numpy as np
from neupy import algorithms, layers
from sklearn.model_selection import KFold
from sklearn import metrics

# Random data as a stand-in for the scaled training set from the question
XTrainScaled = XTrain = np.random.random((10, 2))
yTrainScaled = np.random.random((10, 1))

kfold = KFold(n_splits=5, shuffle=True, random_state=7)
scores = []

for train, test in kfold.split(XTrainScaled):
    x_train, x_test = XTrainScaled[train], XTrainScaled[test]
    y_train, y_test = yTrainScaled[train], yTrainScaled[test]

    # Re-create the network on every fold so each split starts from fresh weights
    cgnet = algorithms.LevenbergMarquardt(
        connection=[
            layers.Input(XTrain.shape[1]),
            layers.Linear(6),
            layers.Linear(1),
        ],
        mu_update_factor=2,
        mu=0.1,
        shuffle_data=True,
        verbose=True,
        decay_rate=0.1,
        addons=[algorithms.WeightElimination],
    )

    cgnet.train(x_train, y_train, epochs=5)
    y_predicted = cgnet.predict(x_test)

    score = metrics.mean_absolute_error(y_test, y_predicted)
    scores.append(score)

print(scores)
scores = np.array(scores)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))





answered Nov 21 at 14:39 by itdxer












  • Thank you, best regards
    – Manuel Almeida
    Nov 21 at 16:38






    @ManuelAlmeida The issue seems to be in the fact that sklearn wants to clone the supplied estimators during the cross_val_score but neupy has something weird going on in its code to initialize the models from clone. You should file this as an issue on neupy's github page
    – Vivek Kumar
    Nov 22 at 7:07










  • Thank you, I will..
    – Manuel Almeida
    Nov 22 at 10:42
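Following up on the comment above about sklearn's clone(): one possible workaround, if you still want cross_val_score() itself, is to hide the neupy network behind a small scikit-learn estimator so that clone() only copies plain constructor parameters and the network (with its addon) is built fresh inside fit(). The sketch below is untested against neupy 0.6.5; the wrapper class name and its parameters are made up for illustration, and it only uses the neupy calls already shown in the question and answer.

import numpy as np
from neupy import algorithms, layers
from sklearn.base import BaseEstimator, RegressorMixin
from sklearn.model_selection import KFold, cross_val_score

class LMWeightEliminationRegressor(BaseEstimator, RegressorMixin):
    """Hypothetical wrapper: builds the neupy network inside fit()."""

    def __init__(self, n_features=2, n_hidden=6, mu=0.1,
                 mu_update_factor=2, decay_rate=0.1, epochs=5):
        # Store constructor arguments untouched; sklearn's clone() relies on this
        self.n_features = n_features
        self.n_hidden = n_hidden
        self.mu = mu
        self.mu_update_factor = mu_update_factor
        self.decay_rate = decay_rate
        self.epochs = epochs

    def fit(self, X, y):
        # A fresh network per fit() call, so clone() never touches a neupy object
        self.network_ = algorithms.LevenbergMarquardt(
            connection=[
                layers.Input(self.n_features),
                layers.Linear(self.n_hidden),
                layers.Linear(1),
            ],
            mu=self.mu,
            mu_update_factor=self.mu_update_factor,
            decay_rate=self.decay_rate,
            shuffle_data=True,
            verbose=False,
            addons=[algorithms.WeightElimination],
        )
        self.network_.train(X, y, epochs=self.epochs)
        return self

    def predict(self, X):
        return self.network_.predict(X)

# Usage with random stand-in data for XTrainScaled / yTrainScaled
X = np.random.random((20, 2))
y = np.random.random((20, 1))
kfold = KFold(n_splits=5, shuffle=True, random_state=7)
estimator = LMWeightEliminationRegressor(n_features=X.shape[1])
scores = cross_val_score(estimator, X, y,
                         scoring='neg_mean_absolute_error', cv=kfold)
print("MAE: %0.2f (+/- %0.2f)" % (-scores.mean(), scores.std() * 2))

If the MRO error still shows up when the network is constructed inside fit(), the manual KFold loop from the accepted answer remains the safer route.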

















