Open closed principle in a method

I have an ASP.NET MVC project in which I need to follow the open/closed principle.



The project converts a .csv file into a model from the database, but in the future we might also have to convert Excel files into the same model.



Now, I have this code in the Convertor class:



public class Convertor
{
    private static ICompanyRepository companyRepository;

    /// <summary>
    /// converts the uploaded CSV data to the Company model
    /// </summary>
    /// <param name="filePath">the path of the CSV file</param>
    /// <returns>a list of Company models</returns>
    public List<Company> ConvertCsvToCompanyModel(string filePath)
    {
        companyRepository = new CompanyRepository(new ImportContext());
        List<Company> companies = new List<Company>();

        // Read the contents of the CSV file.
        string csvData = System.IO.File.ReadAllText(filePath);

        // Skip the first row, because it contains the header.
        var csvLines = csvData.Split('\n').Skip(1);

        // Loop over the rows.
        foreach (string row in csvLines)
        {
            if (!string.IsNullOrEmpty(row))
            {
                // Check whether the ExternalId already exists.
                if (!companyRepository.CompanyExist(row.Split(',')[0]))
                {
                    companies.Add(new Company
                    {
                        ExternalId = row.Split(',')[0],   // CounterPartId
                        TradingName = row.Split(',')[1],  // Name
                        IsForwarder = Convert.ToBoolean(Enum.Parse(typeof(BooleanAliases), row.Split(',')[2])), // IsBuyer
                        IsCarrier = Convert.ToBoolean(Enum.Parse(typeof(BooleanAliases), row.Split(',')[3])),   // IsSeller
                        Phone = row.Split(',')[4],        // Phone
                        Fax = row.Split(',')[5]           // Fax
                    });
                }
            }
        }

        return companies;
    }
}




Can you please give me some hints on how to change this code so that it follows the open/closed principle? As mentioned, we may also need a converter for Excel files in the future.







edited Jan 7 at 20:35 by Jamal♦
asked Jan 7 at 19:11 by User1111

2 Answers






























Before reviewing the micro-design, let's focus on the overall architecture. You already know that in the future you will also need to read Excel files, so it may be a good moment to introduce an abstract base class (or an interface) that abstracts this detail away from the client:



public abstract class Convertor
{
    public abstract IEnumerable<Company> ToCompanyModel(string path);
}



Note a few changes:



• I'm using Convertor as the base-class name; you will then have, for example, CsvConvertor, so there is no need to repeat any of that information in the method name.

• I'm returning IEnumerable<Company> instead of List<Company>. The concrete type you use to store the result is an implementation detail; here I picked the most generic one, but you may return IList<Company> as well. (What if, for example, you decide to store the companies in a hash table to detect duplicates?)
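To make the open/closed angle concrete, here is a minimal sketch of a derived CSV converter under this design. CsvConvertor is an illustrative name, not code from the post; adding Excel support later would mean adding an ExcelConvertor class, with no change to existing code:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;

// Sketch only: Company is the model from the post; field indices are
// assumed to match the original CSV layout.
public class CsvConvertor : Convertor
{
    public override IEnumerable<Company> ToCompanyModel(string path)
    {
        // Skip the header row, then map each remaining line to a Company.
        foreach (var row in File.ReadLines(path).Skip(1))
        {
            if (string.IsNullOrWhiteSpace(row))
                continue;

            var fields = row.Split(',');
            yield return new Company
            {
                ExternalId = fields[0],
                TradingName = fields[1]
                // ... remaining fields as in the original code
            };
        }
    }
}
```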

How do you create the right Convertor instance? I assume you have DI in place, so you may have a factory class that creates the right instance according to the file extension (assuming you're sure you will need Excel files). Something similar to:



public interface IConvertorFactory
{
    Convertor Create(string path);
}
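One possible implementation of that factory, dispatching on file extension (a sketch only; the registry dictionary and class names are assumptions, not code from the post):

```csharp
using System;
using System.Collections.Generic;
using System.IO;

public class ConvertorFactory : IConvertorFactory
{
    // Map file extensions to converter constructors; adding a new format
    // means adding one entry here, not modifying existing converters.
    private readonly Dictionary<string, Func<Convertor>> registry =
        new Dictionary<string, Func<Convertor>>(StringComparer.OrdinalIgnoreCase)
        {
            [".csv"] = () => new CsvConvertor()
            // e.g. [".xlsx"] = () => new ExcelConvertor()
        };

    public Convertor Create(string path)
    {
        var extension = Path.GetExtension(path);
        if (registry.TryGetValue(extension, out var create))
            return create();

        throw new NotSupportedException(
            $"No convertor registered for '{extension}'.");
    }
}
```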



If not, then simply use DI to get the converter instance without any factory. You now have another problem: testing. Do you want to unit test your converter using physical files? Of course you need to, but it's also handy to work with in-memory representations, so you may add an overload that takes a StreamReader:



public abstract class Convertor
{
    public abstract IEnumerable<Company> ToCompanyModel(string path);
    public abstract IEnumerable<Company> ToCompanyModel(StreamReader stream);
}



The default implementation of the path overload may simply open the file and read it line by line; you then won't need to worry about file size, because it won't read everything into memory. Another benefit is that you can test the reader logic without mixing it with I/O logic (which is good for writing extensive tests).
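One way to provide that default implementation is to make the path overload delegate to the stream overload, so subclasses implement only the stream version and file I/O lives in a single place (a sketch under that assumption):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;

public abstract class Convertor
{
    // Non-abstract: opens the file and delegates to the stream overload.
    public IEnumerable<Company> ToCompanyModel(string path)
    {
        using (var reader = new StreamReader(path))
        {
            // Materialize before the reader is disposed, since the
            // stream overload may enumerate lazily.
            return ToCompanyModel(reader).ToList();
        }
    }

    // Subclasses implement only this; tests can pass an in-memory stream.
    public abstract IEnumerable<Company> ToCompanyModel(StreamReader stream);
}
```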




ICompanyRepository companyRepository has no reason to be static; it just makes your class non-thread-safe without any benefit.



You're not doing any error handling. Things may go wrong, and the caller will get an unexpected exception. You'd better handle possible errors and throw a single well-known exception (say, InvalidDataException) with all the required details. Don't forget to document the exceptions you may throw. You may even decide to ignore errors in one row and continue processing.



You're splitting each line with row.Split() multiple times; it's wasteful: do it once.



To reduce nesting you can use continue and filters (again, an example without error handling):

foreach (string row in csvLines.Where(x => !String.IsNullOrEmpty(x)))
{
    var fields = row.Split(',');
    var externalId = fields[0];

    if (companyRepository.CompanyExist(externalId))
        continue;

    // ...
}



However, the truth is that you do not need to do CSV parsing by hand. In the Microsoft.VisualBasic assembly (if you do not want to use an external library) you already have a well-tested and complete implementation:



using (var parser = new TextFieldParser(stream))
{
    parser.SetDelimiters(",");

    // Skip the header.
    parser.ReadLine();

    while (!parser.EndOfData)
    {
        var fields = parser.ReadFields();

        // Same as before.
    }
}


Note that you can catch MalformedLineException to handle errors, and you can specify exactly the expected type of each field (if you want to). Text fields may also be enclosed in quotes; use the TextFieldParser.HasFieldsEnclosedInQuotes property to instruct the parser. Blank lines are skipped by default, and you may even use comments (see the TextFieldParser.CommentTokens property).



One very last note: if the CSV file is generated by Excel, be aware that it won't always use the comma as the delimiter, but rather the current list-separator character (see CultureInfo.TextInfo.ListSeparator).
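You can inspect that separator at runtime; a quick illustration (the value is machine- and locale-dependent, so no fixed output is assumed):

```csharp
using System;
using System.Globalization;

// Excel exports CSV using the OS list separator, not always ","
// (for example ";" in many European locales).
string separator = CultureInfo.CurrentCulture.TextInfo.ListSeparator;
Console.WriteLine($"List separator on this machine: '{separator}'");
```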






When you actually need the Excel feature, you'll know exactly what you need. Right now you can only guess, and you'll likely guess wrong.



This might mean two things:



• You'll have to change it in the future, because it won't be what you guessed.


• Or you may never need that feature at all, and will have made the code unnecessarily complex.






            share|improve this answer





















              Your Answer




              StackExchange.ifUsing("editor", function ()
              return StackExchange.using("mathjaxEditing", function ()
              StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
              StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["\$", "\$"]]);
              );
              );
              , "mathjax-editing");

              StackExchange.ifUsing("editor", function ()
              StackExchange.using("externalEditor", function ()
              StackExchange.using("snippets", function ()
              StackExchange.snippets.init();
              );
              );
              , "code-snippets");

              StackExchange.ready(function()
              var channelOptions =
              tags: "".split(" "),
              id: "196"
              ;
              initTagRenderer("".split(" "), "".split(" "), channelOptions);

              StackExchange.using("externalEditor", function()
              // Have to fire editor after snippets, if snippets enabled
              if (StackExchange.settings.snippets.snippetsEnabled)
              StackExchange.using("snippets", function()
              createEditor();
              );

              else
              createEditor();

              );

              function createEditor()
              StackExchange.prepareEditor(
              heartbeatType: 'answer',
              convertImagesToLinks: false,
              noModals: false,
              showLowRepImageUploadWarning: true,
              reputationToPostImages: null,
              bindNavPrevention: true,
              postfix: "",
              onDemand: true,
              discardSelector: ".discard-answer"
              ,immediatelyShowMarkdownHelp:true
              );



              );








               

              draft saved


              draft discarded


















              StackExchange.ready(
              function ()
              StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fcodereview.stackexchange.com%2fquestions%2f184525%2fopen-closed-principle-in-a-method%23new-answer', 'question_page');

              );

              Post as a guest






























              2 Answers
              2






              active

              oldest

              votes








              2 Answers
              2






              active

              oldest

              votes









              active

              oldest

              votes






              active

              oldest

              votes








              up vote
              2
              down vote













              Before reviewing micro-design we may focus on overall architecture. You already know that in future you will also need to read an Excel file, it may be a good moment to introduce an abstract base class (or an interface) to abstract this detail from the client:



              public abstract class Convertor

              public abstract IEnumerable<Company> ToCompanyModel(string path);



              Note few changes:



              • I'm using Convertor as base class name then you will have, for example, CsvConvertor, no need to repeat any of those information in the method name.

              • I'm returning IEnumerable<Company> instead of List<Company>. The effective type you're using to store the result is an implementation detail, here I picked the most generic one but you may return IList<Company> as well (what if, for example, you will decide to store Company as an hash-table to detect duplicates?)

              How to create the right Convertor instance? I assume you have DI in-place then you may have a factory class which will create the right instance according to file extension (assuming you're sure you will need Excel files). Something similar to:



              public interface IConvertorFactory

              Convertor Create(string path);



              If not then simply use DI to get the converter instance without any factory. You now have another problem: testing. Do you want to unit test your converter using physical files? Of course you need to but it's also handy to work with in-memory representations then you may add an overload with StreamReader:



              public abstract class Convertor

              public abstract IEnumerable<Company> ToCompanyModel(string path);
              public abstract IEnumerable<Company> ToCompanyModel(StreamReader stream);



              Default implementation may simply open the stream reading line-by-line and you won't need to worry about file size because it won't read everything in memory. Another benefit is that you can check for reader logic without mixing with I/O logic (and this is good to write extensive testing).




              ICompanyRepository companyRepository has no reason to be static, you just make your class not thread-safe without any other benefit.



              You're not doing any error handling. Things may go wrong and caller will get unknown exception. You'd better handle possible errors and return a single well-known exception (let's say InvalidDataException) with all the required details. Do not forget to document exceptions you may throw. You may even decide to ignore errors in one row and continue processing.



              You're splitting each line with row.Split() multiple times, it's a waste: do it once.



              To reduce nesting you can use continue and filters (example, again, without error handling):



              foreach (string row in csvLines.Where(x => !String.IsNullOrEmpty(x))

              var fields = row.Split(',');

              var externalId = fields[0];

              if (companyRepository.CompanyExist(externalId))
              continue;

              // ...



              However the truth is that you do not need to do CSV parsing by hand. In Microsoft.VisualBasic assembly (if you do not want to use an external library) you already have a well-tested and complete implementation:



              using (var parser = new TextFieldParser(new StringReader(stream))) 
              {
              parser.SetDelimiters(new string "," );

              // Skip header
              parser.ReadLine();

              while (!parser.EndOfData)

              var fields = parser.ReadFields();

              // Same as before



              Note that you can catch MalformedLineException to handle errors and you can specify exactly the expected type of each field (if you want to). Also text fields may be enclosed in quotes, use TextFieldParser.HasFieldsEnclosedInQuotes property to instruct the parser. Blank lines are, by default, skipped and you may even use comments (see TextFieldParser.CommentTokens property).



              Very last note: if CSV file is generated by Excel then be aware that it won't always use the comma as delimiter but the current list separator character (see CultureInfo.TextInfo.ListSeparator).






              share|improve this answer



























                up vote
                2
                down vote













                Before reviewing micro-design we may focus on overall architecture. You already know that in future you will also need to read an Excel file, it may be a good moment to introduce an abstract base class (or an interface) to abstract this detail from the client:



                public abstract class Convertor

                public abstract IEnumerable<Company> ToCompanyModel(string path);



                Note few changes:



                • I'm using Convertor as base class name then you will have, for example, CsvConvertor, no need to repeat any of those information in the method name.

                • I'm returning IEnumerable<Company> instead of List<Company>. The effective type you're using to store the result is an implementation detail, here I picked the most generic one but you may return IList<Company> as well (what if, for example, you will decide to store Company as an hash-table to detect duplicates?)

                How to create the right Convertor instance? I assume you have DI in-place then you may have a factory class which will create the right instance according to file extension (assuming you're sure you will need Excel files). Something similar to:



                public interface IConvertorFactory

                Convertor Create(string path);



                If not then simply use DI to get the converter instance without any factory. You now have another problem: testing. Do you want to unit test your converter using physical files? Of course you need to but it's also handy to work with in-memory representations then you may add an overload with StreamReader:



                public abstract class Convertor

                public abstract IEnumerable<Company> ToCompanyModel(string path);
                public abstract IEnumerable<Company> ToCompanyModel(StreamReader stream);



                Default implementation may simply open the stream reading line-by-line and you won't need to worry about file size because it won't read everything in memory. Another benefit is that you can check for reader logic without mixing with I/O logic (and this is good to write extensive testing).




                ICompanyRepository companyRepository has no reason to be static, you just make your class not thread-safe without any other benefit.



                You're not doing any error handling. Things may go wrong and caller will get unknown exception. You'd better handle possible errors and return a single well-known exception (let's say InvalidDataException) with all the required details. Do not forget to document exceptions you may throw. You may even decide to ignore errors in one row and continue processing.



                You're splitting each line with row.Split() multiple times, it's a waste: do it once.



                To reduce nesting you can use continue and filters (example, again, without error handling):



                foreach (string row in csvLines.Where(x => !String.IsNullOrEmpty(x))

                var fields = row.Split(',');

                var externalId = fields[0];

                if (companyRepository.CompanyExist(externalId))
                continue;

                // ...



                However the truth is that you do not need to do CSV parsing by hand. In Microsoft.VisualBasic assembly (if you do not want to use an external library) you already have a well-tested and complete implementation:



                using (var parser = new TextFieldParser(new StringReader(stream))) 
                {
                parser.SetDelimiters(new string "," );

                // Skip header
                parser.ReadLine();

                while (!parser.EndOfData)

                var fields = parser.ReadFields();

                // Same as before



                Note that you can catch MalformedLineException to handle errors and you can specify exactly the expected type of each field (if you want to). Also text fields may be enclosed in quotes, use TextFieldParser.HasFieldsEnclosedInQuotes property to instruct the parser. Blank lines are, by default, skipped and you may even use comments (see TextFieldParser.CommentTokens property).



                Very last note: if CSV file is generated by Excel then be aware that it won't always use the comma as delimiter but the current list separator character (see CultureInfo.TextInfo.ListSeparator).






                share|improve this answer

























                  up vote
                  2
                  down vote










                  up vote
                  2
                  down vote









                  Before reviewing micro-design we may focus on overall architecture. You already know that in future you will also need to read an Excel file, it may be a good moment to introduce an abstract base class (or an interface) to abstract this detail from the client:



                  public abstract class Convertor

                  public abstract IEnumerable<Company> ToCompanyModel(string path);



                  Note few changes:



                  • I'm using Convertor as base class name then you will have, for example, CsvConvertor, no need to repeat any of those information in the method name.

                  • I'm returning IEnumerable<Company> instead of List<Company>. The effective type you're using to store the result is an implementation detail, here I picked the most generic one but you may return IList<Company> as well (what if, for example, you will decide to store Company as an hash-table to detect duplicates?)

                  How to create the right Convertor instance? I assume you have DI in-place then you may have a factory class which will create the right instance according to file extension (assuming you're sure you will need Excel files). Something similar to:



                  public interface IConvertorFactory

                  Convertor Create(string path);



                  If not then simply use DI to get the converter instance without any factory. You now have another problem: testing. Do you want to unit test your converter using physical files? Of course you need to but it's also handy to work with in-memory representations then you may add an overload with StreamReader:



                  public abstract class Convertor

                  public abstract IEnumerable<Company> ToCompanyModel(string path);
                  public abstract IEnumerable<Company> ToCompanyModel(StreamReader stream);



                  Default implementation may simply open the stream reading line-by-line and you won't need to worry about file size because it won't read everything in memory. Another benefit is that you can check for reader logic without mixing with I/O logic (and this is good to write extensive testing).




                  ICompanyRepository companyRepository has no reason to be static, you just make your class not thread-safe without any other benefit.



                  You're not doing any error handling. Things may go wrong and caller will get unknown exception. You'd better handle possible errors and return a single well-known exception (let's say InvalidDataException) with all the required details. Do not forget to document exceptions you may throw. You may even decide to ignore errors in one row and continue processing.



                  You're splitting each line with row.Split() multiple times, it's a waste: do it once.



                  To reduce nesting you can use continue and filters (example, again, without error handling):



                  foreach (string row in csvLines.Where(x => !String.IsNullOrEmpty(x))

                  var fields = row.Split(',');

                  var externalId = fields[0];

                  if (companyRepository.CompanyExist(externalId))
                  continue;

                  // ...



                  However the truth is that you do not need to do CSV parsing by hand. In Microsoft.VisualBasic assembly (if you do not want to use an external library) you already have a well-tested and complete implementation:



                  using (var parser = new TextFieldParser(new StringReader(stream))) 
                  {
                  parser.SetDelimiters(new string "," );

                  // Skip header
                  parser.ReadLine();

                  while (!parser.EndOfData)

                  var fields = parser.ReadFields();

                  // Same as before



                  Note that you can catch MalformedLineException to handle errors and you can specify exactly the expected type of each field (if you want to). Also text fields may be enclosed in quotes, use TextFieldParser.HasFieldsEnclosedInQuotes property to instruct the parser. Blank lines are, by default, skipped and you may even use comments (see TextFieldParser.CommentTokens property).



                  Very last note: if CSV file is generated by Excel then be aware that it won't always use the comma as delimiter but the current list separator character (see CultureInfo.TextInfo.ListSeparator).






                  share|improve this answer















                  Before reviewing micro-design we may focus on overall architecture. You already know that in future you will also need to read an Excel file, it may be a good moment to introduce an abstract base class (or an interface) to abstract this detail from the client:



                  public abstract class Convertor

                  public abstract IEnumerable<Company> ToCompanyModel(string path);



                  Note few changes:



                  • I'm using Convertor as base class name then you will have, for example, CsvConvertor, no need to repeat any of those information in the method name.

                  • I'm returning IEnumerable<Company> instead of List<Company>. The effective type you're using to store the result is an implementation detail, here I picked the most generic one but you may return IList<Company> as well (what if, for example, you will decide to store Company as an hash-table to detect duplicates?)

                  How to create the right Convertor instance? I assume you have DI in-place then you may have a factory class which will create the right instance according to file extension (assuming you're sure you will need Excel files). Something similar to:



                  public interface IConvertorFactory

                  Convertor Create(string path);



                  If not then simply use DI to get the converter instance without any factory. You now have another problem: testing. Do you want to unit test your converter using physical files? Of course you need to but it's also handy to work with in-memory representations then you may add an overload with StreamReader:



                  public abstract class Convertor

                  public abstract IEnumerable<Company> ToCompanyModel(string path);
                  public abstract IEnumerable<Company> ToCompanyModel(StreamReader stream);



                  Default implementation may simply open the stream reading line-by-line and you won't need to worry about file size because it won't read everything in memory. Another benefit is that you can check for reader logic without mixing with I/O logic (and this is good to write extensive testing).



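As a sketch of that idea (FileConvertorBase is an illustrative name): the path-based overload opens the file and delegates to the StreamReader overload, so derived classes only implement the parsing half, which is easy to unit test over an in-memory stream:

```csharp
public abstract class FileConvertorBase : Convertor
{
    // The path overload just opens the file and delegates; all parsing
    // lives in the StreamReader overload implemented by derived classes.
    public override IEnumerable<Company> ToCompanyModel(string path)
    {
        using (var reader = new StreamReader(path))
        {
            foreach (var company in ToCompanyModel(reader))
                yield return company;
        }
    }
}
```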

ICompanyRepository companyRepository has no reason to be static; you just make your class non-thread-safe without any benefit.



You're not doing any error handling. Things may go wrong and the caller will get an unknown exception. You'd better handle possible errors and throw a single well-known exception (say, InvalidDataException) with all the required details. Do not forget to document the exceptions you may throw. You may even decide to ignore errors in one row and continue processing.
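For instance (a minimal sketch; ParseRow is a hypothetical helper that builds one Company from a line, and lineNumber is tracked by the loop), you could wrap the per-row parsing and rethrow a single documented exception with the offending line:

```csharp
try
{
    companies.Add(ParseRow(row));
}
catch (FormatException e)
{
    // Surface one well-known exception type with enough context
    // for the caller to locate the bad input.
    throw new InvalidDataException(
        $"Invalid company data at line {lineNumber}: '{row}'.", e);
}
```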



You're splitting each line with row.Split() multiple times; that's wasteful: do it once.



To reduce nesting you can use continue and filters (again, an example without error handling):



foreach (string row in csvLines.Where(x => !string.IsNullOrEmpty(x)))
{
    var fields = row.Split(',');
    var externalId = fields[0];

    if (companyRepository.CompanyExist(externalId))
        continue;

    // ...
}



However, the truth is that you do not need to do CSV parsing by hand. The Microsoft.VisualBasic assembly (if you do not want to use an external library) already contains a well-tested and complete implementation, TextFieldParser:



using (var parser = new TextFieldParser(stream))
{
    parser.SetDelimiters(new string[] { "," });

    // Skip header
    parser.ReadLine();

    while (!parser.EndOfData)
    {
        var fields = parser.ReadFields();

        // Same as before
    }
}

Note that you can catch MalformedLineException to handle errors, and you can specify exactly the expected type of each field (if you want to). Also, text fields may be enclosed in quotes; set the TextFieldParser.HasFieldsEnclosedInQuotes property to instruct the parser. Blank lines are skipped by default, and you may even allow comments (see the TextFieldParser.CommentTokens property).



One very last note: if the CSV file is generated by Excel, be aware that it won't always use the comma as the delimiter but rather the current list-separator character (see CultureInfo.TextInfo.ListSeparator).
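If you need to accept such files, a small sketch (continuing the TextFieldParser example above): ask the current culture for its list separator instead of hard-coding the comma:

```csharp
// Excel saves CSV with the culture's list separator, which is ";"
// in many European locales rather than ",".
var separator = CultureInfo.CurrentCulture.TextInfo.ListSeparator;
parser.SetDelimiters(new[] { separator });
```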







edited Jan 8 at 9:14

answered Jan 8 at 9:09

Adriano Repetti
                      up vote
                      -2
                      down vote













When you actually need the Excel feature, you'll know exactly what you need. Right now, you can only guess and risk getting it wrong.



This might mean one of two things:



• You'll have to change it in the future, because it won't match what you guessed.


• Eventually, you won't need that feature at all and will have made the code unnecessarily complex.






answered Jan 7 at 23:38

A Bravo Dev

                               
